Thursday, December 1, 2011

Richmond Times Dispatch - 2011 state employee salary database - AGAIN!

I received this email from the RTD:
The Times-Dispatch and TimesDispatch.com will publish our annual report on state employee salaries this Sunday, Dec. 4. This year's searchable database includes salaries for all 104,554 state employees, including names and positions for all employees who earn above the state's average of $52,547.
They did the same thing last year.  Here is how I felt then, as a state employee caught up in this database:


On LinkedIn.com you refer to your company as “the leading provider of high-quality news…in central Virginia.”  I think at best that claim is suspect, and at worst, you are providing a false panoply of episodic capabilities to the public, to hide the depth of your desperation to remain a viable news source.  The Richmond Times-Dispatch has a few redeeming qualities, but taken in aggregate you offer very little to your customers that they could not more freely obtain from a multitude of other online outlets.  There are times when publishing FOIA information is warranted; however, this is not one of them, at least not in its current form.  It is especially unwarranted when the purpose is to fuel a negative agenda.  In economically challenging times, Federal and State employees are easy targets for efficiency and effectiveness ratings.  However, I submit that there are many employees who work very hard and who would and could make more money should they move to the private sector and leave public service behind.  Your irresponsible action in publishing the salaries of state employees, SEARCHABLE BY NAME, smacks of arrogance and sensationalism and is bereft of any dignity or integrity that your newspaper claims to still retain.  As far as I am concerned, my name in the context of my agency and position is “Personally Identifiable Information” and as such should not be used.

The damage of your actions far outweighs the value delivered to your readers and the general public.  There is absolutely no constructive reasoning that justifies listing employees names in your database.  From your actions it is clear that you completely disregarded the negative impact that such a database would have on state employees.  These same state employees are also your customers.  With this contemptible action, your organization is morally and ethically bankrupt, and you owe your readers an apology for wasting their time with such twaddle.

I would be very interested to know the salaries of each of your employees making over $50K, as well as their job titles. And of course, I would need the names of said employees to more easily target my public disdain for your obviously poor value proposition and the gross mismanagement of your financial resources.  If that makes no sense to you, then you now realize the situation in which thousands of state employees find themselves, without redress.  I am sure that your employees would not mind; after all, your customers are paying you for your information services.  Do they not have the right to get the best value for their money?  Are not the salaries of your employees just more information about which someone, somewhere would want to read?  Is that not how you choose to publish these days?  Of course you need not answer these rhetorical questions; your actions speak volumes about your motivations.
Just because you can should not be a license to pursue a course of action that has no real probative or news value and merely provides a divisive tool for someone’s personal agenda.  Your misguided exploit is completely irresponsible and, in my opinion, an abuse of said state employees, your customers, and the spirit of the Freedom of Information Act.  It is also an abuse of your power and a misuse of your First Amendment rights.  Furthermore, your actions underpin my long-held belief that nothing useful comes from your disgraceful and, at times, yellow-journalistic tabloid.  You may have temporarily boosted your circulation, or at best your web site traffic, but you have undoubtedly alienated yet another large customer segment and undermined the public’s trust.  Moreover, if more of your readers were affected by this debauched deed, you would undoubtedly apologize or even redact said folly.  Indeed, you misunderstand, in entirety, the meaning of cultivating long-term customer value and loyalty.

This entire episode has been a tactless and colossal waste of our time and attention.  You may be the press, but you are also a business.  I think you should cease acting as the self-anointed herald of worthless hokum and actually cover credible news that makes our lives better and provides at least a modicum of value to your customers.  However, if you are disinclined to act appropriately, one can only hope that your readers recognize how shallow, foul, and jaded you have become and move on to better outlets of information.  For myself, I am almost giddy with the thought that I do not have a subscription to your worthless paper.  I now find it less than suitable to line the bottom of my dog’s crate!

Thursday, November 10, 2011

JEE Developers: Embrace the Polyglot in You

Are you a polyglot programmer?  The term almost has a guttural pronunciation.  In short, a polyglot person has the ability to speak multiple languages.  So, a polyglot programmer is one who can write code in multiple languages.  Most programmers I know are polyglots out of necessity or career evolution.  I started out with BASIC, then moved to LabVIEW, HTML, Visual Basic and even Lotus Notes, settling into Java and its cohorts later on.  Along the way I wrote C and C++ when the need arose, as well as scripts in the usual UNIX/Linux scripting languages (Perl, Korn shell, Python, etc.).

Most of us started out in one language only to switch later to more modern and/or more expressive languages.  Occasionally we may regress if the situation calls for older skills, but in general we have moved forward, building on previous knowledge.  I can hear some of you saying right now that true expertise takes years to build.  I can agree with that, to a degree.  It should take years to become an expert in some languages or technologies.  However, expertise is a relative term, and it should not take decades.  Furthermore, most projects don't staff all experts; there is room for intermediate levels as well.

There are those who have stayed with older languages, for whatever reasons, comfort not being the least of these.  However, it seems to be harder for those individuals to adapt to modern needs, and that makes them less desirable to recruiters.  To remain relevant, we need to stretch outside our comfort zones.  Stories are legion these days of folks searching for work, only to find out that their skills are simply outdated.  When I hire, I try to hire for organizational fit first and skills second.  However, in this economy, recruiters are buyers in a buyer's market.  Some would say that, without the correct buzzwords on resumes or applications, otherwise competent resources are getting overlooked and ignored.  I would say that the skills of these programmers have become less relevant.  As one of my friends would say, "They may have placed themselves in a position to be unlucky."  If your skills are no longer relevant to the strategy of your company or its competitive advantage, you may be unlucky sooner than you would like.

I say embrace the polyglot way.  A good software architect chooses the right systems or layers for the right processing.  Polyglots are not only able to understand the code, they frequently write the code in those different systems or layers.  For example, in Oracle, if you need to process large data sets in an iterative and procedural manner, use PL/SQL.  PL/SQL also gives you better application management and performance, if done correctly.  Do high-level field translation and validation in the HTML form or view layer with JavaScript to enhance the UX.  Do more in-depth data validation in the business layer for data quality and security reasons.

As a software developer you can ill afford to dedicate your focus to one language while ignoring the rest, especially if your language (or framework) has complementary cohorts that are used routinely in application implementation.  If you call yourself a JEE developer, then you already have a lot of Java APIs to master.  On top of the JEE technologies, you should also know Spring and Hibernate intimately, including Spring MVC and EL as well as HQL.  Groovy is also a good language to know, especially if your web application needs dynamic integration, rapid prototyping, and/or can leverage ECMS.

You'd better know SQL, and if you work on Oracle projects, know PL/SQL.  If you are writing JEE web applications, know HTML, CSS, and JavaScript.  I am not saying that backend developers have to be frontend experts, but one should know what goes on over the fence from them.  In fact, for all networked applications, know the protocols you will be using (TCP/IP, HTTP/HTTPS, etc.).

If your applications are deployed in a UNIX/Linux environment, know scripting languages.  For view-layer integration, know AJAX and JSON.  Finally, almost nothing is possible without XML.  Do you know what it means to be well-formed and valid?  Do you know DTDs and XSDs?  Could you translate XML to something else with XSL/XSLT?  If you are not developing some sort of web service or other integration with XML, you most likely have XML configuration files in your application.

In short, don't limit yourself when applying for positions or choosing projects just because you focus only on Java or .NET.  Know the cohort technologies as well.  You don't need to be an expert in all things; then again, the pure IT generalist is also in low demand.  You need to balance depth with breadth.  Focus on core technologies, and then be familiar with complementary technologies.  Develop more depth in the non-core technologies when you have a sufficient comfort level in the core.  This is not fake-it-till-you-make-it, either.  From time to time you will have the opportunity to be a temporary expert in more ancillary technologies, languages, or frameworks.  You may even develop a temporary monopoly in new technologies or approaches.  That is where real value is added to your organization.  Even if you can get projects with your older skills, will you be happy with those projects?

If you know enough about the side technologies to spin up quickly, that should be enough for most projects.  Scope the landscape for relevant technologies, and then choose wisely when looking to enhance your capabilities.  In my experience, it is easier to keep current in these technologies if they are complementary and if you have the chance to use them for real deliverables.  Don't place yourself in a position to be "unlucky".  Be cognizant of the relationship between your skills, the strategy of your organization, and the direction in which you wish your career to proceed.  Embracing the polyglot approach can differentiate you from your contemporaries, making you more relevant to your organization and the job market.

Friday, October 7, 2011

PMI CVC PMP 2011 Fall Workshop - Framework Talk

I will be presenting on October 15 for the PMI Central Va Chapter during their Fall 2011 PMI Certification Workshop.  My topic will be the Project Management Framework.  My talk goes from 10:00 AM to 11:15 AM and touches on items found in sections 1 & 2, chapters 1, 2, & 3, of the PMBOK, version 4.

I have been presenting this topic for the PMI CVC for about a year.  In my session I will be discussing these topics and more:
  • Projects, Portfolios, and Programs
  • Process Groups, Knowledge Areas, and Processes
  • PMO
  • Project Life Cycle vs. Product Life Cycle
  • Stakeholder Management
  • Organizational Structure
Ever wondered how PMI keeps the PMP exam current and relevant?  This year we have also added information on the PMI Role Delineation Study and the Crosswalk.

This workshop is a great way to come up to speed for the PMP exam as well as gain valuable study tips from fellow project managers who have already passed the exam.  The workshop is also a great opportunity to gain the PDUs needed to maintain existing PMP certifications.  Best of all, attendees receive copies of all the slides presented at the workshop as well as other resources to help them study for the exam.

Monday, September 26, 2011

Greed is a Terrible Thing - Goodbye to the SCEA

Industry pundits have speculated on Oracle's ability to be a good steward of Java.  Since when does a good technology steward prevent adoption of said technology?  Can you say greedy failure?

I tried to register for the Oracle Certified Master, Java EE 5 Enterprise Architect (formerly SCEA) certification today.  After the initial sticker shock of $900 for the cert, I soon realized that Oracle has increased the price further by making it mandatory to take one of their training classes in order to gain this certification.  I guess they do this for several of their other certifications.  That adds another several thousand dollars or so.  Mind you, they are not requiring that you take all the courses that would teach you about the different technology concerns in the architecture certification.  So, where is the value of making candidates take a class if they can choose the simple "Web Component Development with Servlets & JSPs" or "Java Programming Language"?  What architect cannot write servlets and JSPs, or does not understand their positioning?  What architect does not know the difference between a HashMap and a Hashtable?

In my opinion, this is a blatant act of greed and a splendid display of stupidity.  Oracle (and its mindless sycophants) can certainly argue that this move strengthens the value of this certification.  Utter twaddle-speak, says I!  Pricing a certification to make it unreachable does not make it more valuable to the industry in aggregate; it just makes it unreachable.  This benefits no one but Oracle and those already certified.

I (along with several contemporaries) believe this will severely limit the certification to those of us not already engaged in austerity measures (self-imposed or not) brought on by the poor economy.  Are those the boundaries that Oracle was actually hoping for?  With this callous and extremely short-sighted folly, they have effectively isolated a segment of Java architects not backed financially by an organization willing to pay the ridiculous expense.  They have created a class of Java technologists based solely on the monetary funds available to the technologist.  Way to go, Oracle!

Instead of making the exam more valuable by making it harder to pass (currently you need a 57% to pass the "multiple-guess" exam, part 1 of the exam trilogy), they seem to be trying to make it more valuable by placing it out of reach, monetarily.  Technology is supposed to facilitate the crossing of economic boundaries, not put in place boundaries of its own making.  This myopic move will only further dilute the viability of Java as an enterprise solution.  There is already a split within the Java community, with many of us embracing the simplicity and reuse of the multiple technologies (Spring, Hibernate, FreeMarker, Groovy, etc.) not directly espoused by the EE spec and leaving the complexities of EJBs and JSF behind.  I mean, I can certainly specify and write EJB 3.x and JSF; I just choose not to in most cases.  And do not even get me started on Oracle's WebLogic platform.

Large corporations seem to have a knack for destroying technologies with "kinetic stupidity".  IBM and Lotus Notes/Domino anyone?  However, isolating an entire segment of your long-time supporters just to make a few more dollars is a colossal failure.  They had an opportunity here to move in a positive direction.  In true monolithic fashion, they took the opportunity to miss that opportunity.  It seems that they chose to increase their bottom line at the expense of the community that has bolstered Java for the last decade or so.

Tuesday, July 12, 2011

JSR 303 – Patterns and Anti-patterns with Hibernate and Spring Web Flow

According to the Java Community Process (JCP), Java Specification Request (JSR) 303 defines “…a meta-data model and API for JavaBean validation based on annotations…”  Simply put, JSR 303 specifies a set of predefined annotations and a facility to create custom annotations that make it easier for Java Bean designers to define how their model objects will be validated at run time.  Similar to and often used with JPA annotations, Javax Bean Validation annotations provide an easy way to apply common validation patterns driven by annotated constraints.  These constraint annotations can specify default messages for constraint violations, or the messages can be resolved at runtime using the Java properties file facility.

In general, Java developers regularly apply design patterns when building applications or components.  During the last decade we have embraced annotated programming as a means of applying common patterns, and Javax Bean Validation annotations follow common patterns that reduce the variability in model validation.  Listing 1 is a snippet from an example Customer bean with both JPA and Javax Bean Validation annotations.

Listing 1
import java.util.Date;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.validation.constraints.Future;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

@Entity(name = "CUSTOMERS")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE)
    private long id;

    @NotNull(message = "{Customer.firstName.NotNull}")
    @Size(min = 1, max = 30, message = "{Customer.firstName.Size}")
    @Column(name = "FIRST_NAME")
    private String firstName;

    @NotNull(message = "{Customer.lastName.NotNull}")
    @Size(min = 1, max = 30, message = "{Customer.lastName.Size}")
    @Column(name = "LAST_NAME")
    private String lastName;

    @NotNull(message = "{Customer.email.NotNull}")
    @Column(name = "EMAIL")
    @Pattern(regexp = "^[\\w-]+(\\.[\\w-]+)*@([a-z0-9-]+(\\.[a-z0-9-]+)*?\\.[a-z]{2,6}|(\\d{1,3}\\.){3}\\d{1,3})(:\\d{4})?$", message = "{Customer.email.Pattern}")
    private String email;

    @NotNull(message = "{Customer.phone.NotNull}")
    @Column(name = "PHONE_NUMBER")
    @Pattern(regexp = "^\\(?(\\d{3})\\)?[- ]?(\\d{3})[- ]?(\\d{4})$|^(\\d{3})[\\.](\\d{3})[\\.](\\d{4})$", message = "{Customer.phone.Pattern}")
    private String phone;

    @NotNull(message = "{Customer.lastActivityDate.NotNull}")
    @Future(message = "{Customer.lastActivityDate.Future}")
    @Column(name = "ACTIVITY_DATE")
    private Date lastActivityDate;

    // getters, setters, and the remainder of the class omitted
}

In Listing 1, the validation annotations are easy to understand.  The firstName and lastName fields have constraints imposed by annotations that do not allow null values and that constrain values to a certain size.  The interesting thing about these @NotNull and @Size annotations is that they retrieve their respective constraint violation messages using a message key, such as {Customer.firstName.Size}, instead of defining a literal message string.  The message arguments of Javax Bean Validation annotations can take a literal string as a default constraint violation message, or a key that points to a property value in a Java properties file in the root classpath of the application.  Externalizing these string messages aligns the JSR 303 approach with other annotated programming and message processing techniques used by Java developers today.
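
For illustration, the corresponding entries in a ValidationMessages.properties file at the root of the classpath might look like the following (the keys mirror the message arguments in Listing 1; the message wording is hypothetical):

Customer.firstName.NotNull=First name is required.
Customer.firstName.Size=First name must be between 1 and 30 characters.
Customer.email.Pattern=Please enter a valid email address.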

The additional constraint annotation seen in the example in Listing 1 is @Pattern.  This annotation, shown in its simplest form, takes both message and regexp arguments.  The regexp argument is a Java regular expression string that is applied as a matching constraint.  In my testing, I tried supplying this value via String enum arguments and by looking it up from a properties file, much the same way messages are resolved.  This did not work; I kept getting the error, "The value for annotation attribute Pattern.regexp must be a constant expression."  However, I was able to use a static String constant.  This seems to violate the convention of externalizing strings to properties files; perhaps this will change in the near future.

Beyond size, null, and pattern constraints, JSR 303 has several other predefined constraint annotations that are listed in Figure 1.  One need only consult the API documentation to discover their usage.

Figure 1: the predefined constraints in the javax.validation.constraints package are @AssertFalse, @AssertTrue, @DecimalMax, @DecimalMin, @Digits, @Future, @Max, @Min, @NotNull, @Null, @Past, @Pattern, and @Size.
Reference Implementations
Like many of the later JSR initiatives, the reference implementation (RI) for JSR 303 is done by non-Oracle, open source development.  In particular, JSR 303 is led by Red Hat, and the RI is based on Hibernate's Validator 4.x.  A second implementation that also passed the Technology Compatibility Kit (TCK) is provided by the Apache Software Foundation.  In short, this means that along with the Java Bean Validation API JAR from Oracle, one must also use one of these other implementations.  For this example I chose the RI from Hibernate.  Figure 2 shows the libraries required to use Javax Bean Validation; additional logging libraries are required by the Hibernate implementation.

Figure 2
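
For Maven users, a dependency set along these lines pulls in the Bean Validation API and the Hibernate RI (the versions shown are illustrative of the 4.x era, not a requirement):

<dependency>
    <groupId>javax.validation</groupId>
    <artifactId>validation-api</artifactId>
    <version>1.0.0.GA</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>4.2.0.Final</version>
</dependency>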

With the Hibernate Validator implementation, there are several additional constraint annotations provided; see Figure 3.  You may notice the @Email and @URL annotations that provide constraints for well-formed email addresses and URLs, respectively.  Of course, these are not part of the specification and are considered extensions, albeit very handy extensions.  Listing 2 is an example of what the email field would look like annotated with the @Email annotation.

Figure 3: the Hibernate Validator extension constraints include @CreditCardNumber, @Email, @Length, @NotBlank, @NotEmpty, @Range, @ScriptAssert, and @URL.

Listing 2
    @NotNull(message = "{Customer.email.NotNull}")
    @Column(name = "EMAIL")
    @Email(message = "{Customer.email.Email}")
    private String email;

Spring Web Flow Validation
The makers of Spring and Spring Web Flow have recognized the need for uniform bean (model) validation and have provided the ability to integrate JSR 303 Bean Validation into their containers.  This integration combines industry-standard Spring Web Flow with the functionality of JSR 303.  According to SpringSource, "Model validation is driven by constraints specified against a model object."  Spring Web Flow embraces two methodologies for this validation: JSR 303 and Validation by Convention.

One of the issues with these two validation techniques is that they are not mutually exclusive.  If JSR 303 validation is enabled along with Spring's Validation by Convention, duplicate error messages can result.  The approach that I recommend is to use Validation by Convention and set up the validation methods to call JSR 303 validation on model objects as needed.

Validation by Convention using JSR 303
Validation by Convention makes it simple to map validation to Spring Web Flows.  Listing 3 is a snippet from a Spring Web Flow definition, defining the enterCustomerDetails view-state that is bound to the Customer model object.

Listing 3
<view-state id="enterCustomerDetails" model="customer">
    <binder>
        <binding property="firstName" />
        <binding property="lastName" />
        <binding property="email" />
        <binding property="phone" />
        <binding property="lastActivityDate" />
    </binder>
    <transition on="proceed" to="reviewCustomerData" />
    <transition on="cancel" to="cancel" bind="false" />
</view-state>


Using Validation by Convention, we follow the Spring Web Flow pattern of ${Model}Validator and create the CustomerValidator class to handle validation calls from Spring Web Flow.  Inside this class, we write methods that match the pattern validate${view-state} to link the validation routines to the corresponding web flow view-state.  Listing 4 is an example of the CustomerValidator with the validateEnterCustomerDetails() validation method.

Listing 4
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

import org.springframework.binding.message.MessageBuilder;
import org.springframework.binding.message.MessageContext;
import org.springframework.binding.validation.ValidationContext;

import org.springframework.stereotype.Component;

@Component
public class CustomerValidator {

    public void validateEnterCustomerDetails(Customer customer,
            ValidationContext context) {
        // Seed the map with the properties bound in this view-state;
        // ModelValidator validates only the properties named by these keys.
        Map<String, List<String>> propertyMap = new LinkedHashMap<String, List<String>>();
        for (String property : new String[] { "firstName", "lastName",
                "email", "phone", "lastActivityDate" }) {
            propertyMap.put(property, null);
        }
        boolean valid = ModelValidator.validateModelProperties(customer,
                propertyMap);

        if (!valid) {
            MessageContext messages = context.getMessageContext();
            for (Entry<String, List<String>> entry : propertyMap.entrySet()) {
                String key = entry.getKey();
                List<String> values = entry.getValue();
                if (null != key && !key.isEmpty() && null != values
                        && null != values.get(0) && !values.get(0).isEmpty()) {
                    messages.addMessage(new MessageBuilder().error()
                            .source(key).defaultText(values.get(0)).build());
                }
            }
        }
    }
}


In Listing 4, the validateEnterCustomerDetails() method is called by Spring Web Flow when a view-state transition occurs.  This method in turn calls the custom ModelValidator.validateModelProperties() method, passing the model object and a map whose keys name the bean properties to be validated in the model object.  This technique allows us to use the provided conventions of Spring Web Flow with the annotated constraints of JSR 303 Javax Bean Validation.

Listing 5 is the source for the ModelValidator class that does the heavy lifting and works with the bean validation for each model bean.  The idea here is that each Validation by Convention method, matching a Web Flow view-state, makes a call to the ModelValidator, passing in the desired properties to validate.

Listing 5
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

public class ModelValidator {

    private static ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
    private static Validator validator = factory.getValidator();

    /**
     * Validates the model properties named by the keys of the messages map,
     * collecting any constraint violation messages into the map values.
     * Returns true only if every named property passes validation.
     */
    public static <T> boolean validateModelProperties(T model,
            Map<String, List<String>> messages) {
        boolean isValid = true;

        if (null == messages) {
            messages = new LinkedHashMap<String, List<String>>();
        }

        for (String key : messages.keySet()) {
            Set<ConstraintViolation<T>> constraintViolations = validator
                    .validateProperty(model, key);

            if (constraintViolations.size() > 0) {
                isValid = false;
                List<String> values = new ArrayList<String>();
                for (ConstraintViolation<T> violation : constraintViolations) {
                    values.add(violation.getMessage());
                }
                messages.put(key, values);
            }
        }

        return isValid;
    }
}


JSR 303 Custom Constraints
Along with the built-in constraint annotations provided by JSR 303 and Hibernate Validator, JSR 303 provides the facility to write your own custom constraints.  Custom constraints are needed when you want to apply validation logic that is not possible with the supplied constraints.  For example, if you had a Reservation model object and needed to validate the check-in and check-out dates, applying the logic that the check-out date should always be after the check-in date, you would need to write a custom constraint.  With the supplied annotations, you can add constraints to force the dates to be non-null and in the future (compared to today's date), but there is no supplied annotation that performs the logic necessary for date comparison.

The thing to keep in mind is that creating custom constraints is a two-step process.  These steps can be done in any order, but we will start with creating the constraint annotation first.  Listing 6 is an example of a custom constraint annotation.  This annotation is applied to the model class, similarly to the @Entity JPA annotation.  What should be noticed here is the validatedBy argument; its purpose is to link the constraint annotation to the implementation class, in this case ReservationDateRangeImpl.  In other words, ReservationDateRangeImpl will perform the actual validation.

Listing 6
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.Payload;

@Constraint(validatedBy = ReservationDateRangeImpl.class)
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.CONSTRUCTOR, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
public @interface ReservationDateRange {

    String message() default "The check-out date must be after the check-in date.";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

}


Listing 7 is the implementation class, ReservationDateRangeImpl.  When the model is validated, the @ReservationDateRange constraint will also be applied, and this class will perform the date comparison.

Listing 7
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class ReservationDateRangeImpl implements
        ConstraintValidator<ReservationDateRange, Reservation> {

    public void initialize(ReservationDateRange reservationDateRange) {
    }

    public boolean isValid(Reservation reservation,
            ConstraintValidatorContext context) {
        if ((reservation.getCheckInDate() != null)
                && (reservation.getCheckOutDate() != null)
                && reservation.getCheckOutDate().before(
                        reservation.getCheckInDate())) {
            return false;
        }
        return true;
    }
}
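
To tie the pieces together, a bare-bones Reservation model annotated with the custom constraint might look like the following.  This class is a hypothetical sketch (the original post does not show it); it assumes the annotation from Listing 6 and the validator from Listing 7 are on the classpath.

import java.util.Date;

import javax.validation.constraints.NotNull;

// Hypothetical model class, shown only to illustrate applying the
// class-level custom constraint from Listing 6.
@ReservationDateRange
public class Reservation {

    @NotNull(message = "{Reservation.checkInDate.NotNull}")
    private Date checkInDate;

    @NotNull(message = "{Reservation.checkOutDate.NotNull}")
    private Date checkOutDate;

    public Date getCheckInDate() { return checkInDate; }
    public void setCheckInDate(Date checkInDate) { this.checkInDate = checkInDate; }
    public Date getCheckOutDate() { return checkOutDate; }
    public void setCheckOutDate(Date checkOutDate) { this.checkOutDate = checkOutDate; }
}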


A Word About Groups
Looking back at Listing 6, you might notice the line that refers to a Class[] array called groups.  Constraints can be organized into groups; moreover, it is possible for each constraint to belong to several groups, including the Default group.  With groups, you can specify which combination of constraints (supplied or custom) you wish to apply during manual validation.  The drawback here is that each group is represented by a Java class.  To date, I have used interfaces as my group classes.
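
As a sketch (the group interface, the nested Reservation class, and its field are all hypothetical), a group is just a marker interface that constraints can reference and that can be passed to the validator:

import java.util.Date;
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;

public class GroupExample {

    // Hypothetical marker interface used as a validation group.
    public interface CheckOutChecks {
    }

    public static class Reservation {
        // A constraint assigned to the CheckOutChecks group; note that a
        // constraint with an explicit groups value no longer belongs to
        // the Default group.
        @NotNull(groups = CheckOutChecks.class)
        private Date checkOutDate;
    }

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        // Validate only the constraints in the CheckOutChecks group.
        Set<ConstraintViolation<Reservation>> violations =
                validator.validate(new Reservation(), CheckOutChecks.class);
        System.out.println(violations.size() + " violation(s)"); // prints 1
    }
}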

Issues with Custom Constraints - Separation of Concerns or Anemic Domain
Before you go off and write your own custom constraint annotations, take a moment to reflect upon the purpose and intended behavior of your domain or model objects.  Albeit easy to do, writing custom constraint annotations to enforce business logic may not be the right call.  Adding business logic to domain or model objects is generally considered a bad idea, since it violates the principle of "Separation of Concerns" and mixes the business logic layers of an application with its domain or model layer.  I guess this really depends on which side you agree with in the "Anemic Domain Model" anti-pattern debate.

I have my beliefs, but I am not advocating in either direction.  One could argue that the Reservation object is incomplete if we cannot constrain the behavior of the check-in/check-out date fields.  It could also be argued that we have already violated the tenets of proper design by embedding persistence concerns in our domain objects when we added JPA annotations to them.  I look at it from a pure OOP perspective, which combines data and behavior in each object.  From there, I apply the "what makes the best sense" paradigm when designing my applications and using these techniques.

Sunday, July 10, 2011

Picking up old and new instruments: Java M(MX)Beans and Agents

Even though JMX and MBeans have been around for over a decade, there are a lot of Java developers who have still never written or used them.  They may have used them with JEE application servers, or even with VisualVM and JConsole, but for the most part MBeans have been relegated to major Java applications.  I submit that they are just as useful when you encounter smaller Java applications that need remote and graceful control.

Remotely and Gracefully
Remote control means accessing MBeans outside of the JVM in which they reside.  That's simple enough.  Graceful control is more esoteric: what's graceful to one developer may not be so graceful to another, or to the end user.  Suffice it to say that to me, graceful means being able to control a Java program, including shutting it down and changing its processing, without introducing instability, uncontrolled results, or possible data loss.  Is this type of control only important to larger Java applications?

The Example
The scenario:  You have a batch process that is automated via a Java program.  This program could be executing as part of a Windows Scheduled Task or a Unix/Linux CRON job.  The Java program processes file resources on a scheduled basis using multiple threads.  You need to introduce a method to interrupt processing in a deterministic manner so that there is no ambiguity around which files have been processed.  Part of your program already monitors threads as observables.  So, this is the optimum component to signal to the threads that it is time to stop, after a file is processed successfully and before the next file is picked up.

There are several ways that this can be accomplished, not the least of which is a signal file that your threaded app reads between each file that it processes.  However, for deterministic control, MBeans are a better choice.  And they are very easy to write.  Listing 1 shows how we would implement a simple MBean to control processing.  The ProgramControlMBean interface specifies our shutdown() method, and the ProgramControl class implements this interface.  The MBean also has to register with the MBean server to be accessed and used as an MBean.  Listing 1 contains a private register() method to accomplish registration.  If you have one bean to register in your program, I would let the bean register itself.  For multiple MBeans, I might use a different component within the program to register all the beans.

Listing 1
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

public interface ProgramControlMBean {
    public String shutdown();
}

public class ProgramControl implements ProgramControlMBean {

    public String shutdown() {
        // Signal the worker threads to stop after the current file
        // completes, then report status to the caller.
        return "shutdown initiated";
    }
    ...

    private void register() {
        try {
            // Register as MBean with MBean Server
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();

            ObjectName name = new ObjectName(
                    "PACKAGE_NAME_HERE:type=ProgramControl");

            server.registerMBean(this, name);
        } catch (Throwable t) {
            // log is the program's existing logger
            log.error("ProgramControl could not register with MBean server.");
        }
    }
}


With this combination we now have an MBean that is available in the same JVM in which our program runs, hosted by that JVM's MBean server.  If we have done things correctly, we are able to access this MBean using the JMX API from within our program.  We are almost done.  We now need to be able to remotely call the MBean to shut down our program.  This means accessing the JMX server that hosts this MBean from outside the JVM in which it runs.  The folks that created the JMX API also came up with an easy way to access the JMX server.

Listing 2 shows the JVM arguments that are used to start a Java Agent that allows remote access to MBeans running in the JVM's JMX server.  Remote programs can access the JMX server via port 3334; authentication and SSL are not required.

Listing 2
-Dcom.sun.management.jmxremote.port=3334
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false


With these JVM arguments, we can easily start a Java Agent when we start the Java program.  Simply put, Java Agents are programs that are usually independent of other programs (including other agents) but loaded in the same JVM.  They have specific goals that are outside of the main program, like data collection or troubleshooting.

Custom Java Agent for Remote JMX/MBean Access
The Java Agent is necessary since the JMX/MBean server is not accessible by resources outside of the local JVM.  With the JVM arguments, it is very easy to configure an agent that allows us to connect to the target JMX server.  We could also write our own Java Agent, as seen in Listing 3.

Listing 3
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;

import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxAgent {

    private RemoteJmxAgent() {
    }

    public static void premain(String agentArgs) throws IOException {
        System.setProperty("java.rmi.server.randomIDs", "true");

        // Start an RMI registry on port 3334
        final int port = Integer.parseInt(System.getProperty(
                "us.va.state.vwc.print.agent.port", "3334"));
        LocateRegistry.createRegistry(port);

        // Get handle to JMX MBean server
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Environment map (empty: no authentication for this example)
        HashMap<String, Object> env = new HashMap<String, Object>();

        // Set up the RMI connector server
        final String hostname = InetAddress.getLocalHost().getHostName();
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi://" + hostname
                + ":" + port + "/jndi/rmi://" + hostname + ":" + port
                + "/jmxrmi");

        System.out.println("JMX URL::" + url);

        JMXConnectorServer connectorServer = JMXConnectorServerFactory
                .newJMXConnectorServer(url, env, server);

        // Start the connector server
        connectorServer.start();
        System.out.println("RMI connector server started on port " + port);
    }
}


The agent in Listing 3 starts an RMI connector server that will allow remote RMI clients (VisualVM, JConsole, etc.) to connect to the JMX MBean server that runs in the JVM to which the agent attaches.  What you should immediately notice is that the main() method has been replaced with a premain() method.  That's it, at least at the class definition level.  Getting agents to load when the JVM starts is a little more involved.  First there are the JVM arguments seen in Listing 4; agents must be packaged in their own JAR.

Listing 4
-javaagent:agent.jar

And then there is the manifest file entry (Listing 5) that identifies the agent class in the JAR:

Listing 5
Premain-Class: RemoteJmxAgent

If one considers the complexity of writing the RemoteJmxAgent agent class and then adding authentication and transport-level security, it just makes more sense to use the provided JVM agent configuration seen in Listing 2.
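
Either way, once the connector is up, a small remote client can look up the MBean from Listing 1 and invoke it.  The following is a minimal sketch; the hostname, port, and object name are assumptions that must match your agent configuration:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ShutdownClient {

    public static void main(String[] args) throws Exception {
        // Connect to the JMX agent started with the Listing 2 arguments
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:3334/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("PACKAGE_NAME_HERE:type=ProgramControl");

            // Create a proxy for the remote MBean and invoke its operation
            ProgramControlMBean control = JMX.newMBeanProxy(connection, name,
                    ProgramControlMBean.class);
            System.out.println(control.shutdown());
        } finally {
            connector.close();
        }
    }
}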

MXBeans
As of Java 1.6, it is recommended that developers write MXBeans instead of MBeans; note that MXBeans are only available in 1.6 and beyond.  MXBeans are very similar to MBeans.  Instead of implementing an interface named ${CLASS_NAME}MBean, an MXBean implements an interface named ${CLASS_NAME}MXBean.  MXBeans are also registered with the MBean server the same way MBeans are.

The main difference between MBeans and MXBeans is that MXBeans, unlike MBeans, use Java Open Types.  The MXBean implementation maps custom objects returned from MXBeans to the open type CompositeDataSupport.  This means that clients using the remote MXBean do not have to have concrete implementations of the classes defined as return types from the MXBean.  There are restrictions on the types returned from MXBeans, but this is still easier than having to remotely implement custom classes like we had to in the old days of RMI.  Before using MXBeans, I would become familiar with the specification.
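
A minimal sketch of the convention follows; the ProgramStatus class and its statistic are hypothetical, invented here only to show the naming and registration pattern:

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

// The interface name must end in "MXBean" instead of "MBean".
public interface ProgramStatusMXBean {
    int getFilesProcessed();
}

public class ProgramStatus implements ProgramStatusMXBean {

    private volatile int filesProcessed;

    public int getFilesProcessed() {
        return filesProcessed;
    }

    public void fileProcessed() {
        filesProcessed++;
    }

    // Registration is identical to the MBean case.
    public void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(this,
                new ObjectName("PACKAGE_NAME_HERE:type=ProgramStatus"));
    }
}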

Conclusion
MBeans, MXBeans, and Agents are available to Java developers to help them instrument their Java applications.  I have shown a few examples of these powerful technologies; we must learn to walk before we can run.

Software Estimation - Fibonacci Meets PERT

In my previous post I described an organic process by which I solicited feedback from team members.  The feedback was a Fibonacci-sequence number used to roughly estimate the complexity and/or work effort of a JIRA task.  This approach is only effective when you have a good team that you can rely on and trust to do the right thing.  This number is just an abstract indicator of the level of effort that our development team estimates/recommends for a particular task.

Alone, the Fibonacci number is useless to the business users or project managers.  This is our number.  It carries with it our insights, based on our knowledge, and our experience.  To be useful to others, we must convert it to some metric that they can use.  Enter PERT.

If you are looking for exhaustive instruction on estimating with PERT, look elsewhere.  I use the typical PERT formulas with a few tweaks here and there.  In the past I have used several multipliers:  Developer Skill Level, Conflicts, New Technology, Code Re-use, etc.  I have now found that the team discussion with Fibonacci output eliminates the need for most of these discrete multipliers.  If you don't trust your team's number, you can always change it.  However, be prepared to communicate to the team why you usurped their authority.

For conflicts, I use the industry standard of 0.7.  That is to say, we only get 70% of our day to develop; the rest is consumed by production support, administrative duties, and other tasks that conflict with our development efforts.  Most of the time, developers discount this or ignore it altogether.  I use 1.3 as a multiplier to calculate the expanded time due to the 0.3 reduction in productivity.
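
For reference, the standard three-point PERT formula behind those calculations is E = (O + 4M + P) / 6, where O, M, and P are the optimistic, most likely, and pessimistic estimates.  As a worked example with hypothetical numbers: O = 2, M = 4, and P = 8 developer-days gives E = (2 + 16 + 8) / 6 ≈ 4.3 days, and applying the 1.3 conflict multiplier yields roughly 5.6 calendar days.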

An example spreadsheet is seen below.  Obviously, as we approach the higher Fibonacci numbers, the estimates approach a point at which they are no longer realistic.  This is not a one-size-fits-all approach.  In fact, this method requires tuning over time to better fit your team's velocity and accuracy.

There are multiple levers that can be used to adjust the PERT output of column "E".  This is part of the tuning process.

The main idea is that you remove the complexity of feature estimation from your development team when you are still forced to estimate and execute your project in a waterfall fashion, with reasonable predictability and reproducibility.