Wednesday, October 31, 2012

Java-based BDD with Cucumber

Recently we have been looking at creating tighter collaboration between our Developers, Testers, and Business Analysts (BAs).  They all share the common project-level goal of delivering high-quality software, but they each fill different roles in the development cycle.  The key to successful collaboration is the evolution of business requirements into stories, then features, and finally executable behavioral specifications in the form of unit tests.

Our projects are primarily Java-based (with some Groovy thrown in for fun), so we have been looking primarily at tools for Behavioral Driven Development (BDD) in Java.  To that end, we first looked at Spock.  Spock is very powerful and integrates well with Groovy.  This makes it very expressive and easy for our developers to learn and quickly leverage.  Therein lies the issue.  Spock, at least in our experience, is very developer-centric.  Feature specifications written in Groovy with Spock's Domain Specific Language (DSL) are done by the developer, mostly without the BA or tester.

One can argue that pairing developers with testers and BAs would solve the developer centricity of Spock.  I have read the arguments that the reduction in defects from pair programming outweighs the cost of the combined resources.  However, while pair programming would help developers build the specs in Spock, it does not erase the fact that Spock is mostly for developers.

Developer centricity was always my argument against Test Driven Development (TDD).  While I understand the ideas behind TDD, it seemed to me that it always lacked that collaboration: key input from the folks capturing and testing the business requirements.  Writing JUnit tests to meet a requirement, then writing code to pass the JUnit tests, is good, but still decoupled from the requirements and true behaviors of the system under development.

With BDD, the BAs can build the User Stories.  From these stories, the testers and developers can distill the features.  Testers can also derive the behavioral specifications that should be the foundations for testing the behaviors of features in the context of user stories.  Finally, it would be very nice if the testers, after agreeing on a standard DSL, could help the developers generate unit tests in an automated fashion, thereby reducing variability from the behavioral specification.  Enter Cucumber.

Cucumber was originally written in Ruby.  I am not a Ruby developer, so I never used the tool.  I heard really good things about it and how successful BDD was with it.  Finally, this year, Aslak Hellesoy ported Cucumber to Java (cucumber-jvm).  I was aware of Cuke4Duke, but I was just not eager to use JRuby.  So when I heard about Cucumber in Java, I started my research.  What follows are my initial research and impressions of the tool and its utility for BDD in Java.

Getting Started
To get started with Cucumber, I highly recommend the book, "The Cucumber Book: Behaviour-Driven Development for Testers and Developers".  Though it is written for Ruby development, its coverage of Cucumber concepts and Gherkin is well worth the read.  The Gherkin examples in the book work in the Java port of Cucumber.

For the uninitiated, Gherkin is the line-based language that Cucumber uses to define behaviors in the form of features, scenarios, and steps.  Gherkin files are feature files with the ".feature" file extension.  One of the first conventions to learn is that only one feature can be specified in a feature file.  The next is the Gherkin syntax.

Feature: Password Manager
  Scenario: Change password
    Given User is logged in
    And User is on edit profile page
    When User presses Edit Password button
    And User enters "value" for new password and repeats "value" for new password confirmation
    And User presses "Change password"
    Then User should see "Password changed"
In this example, the Password Manager feature contains the Change Password scenario.  This scenario is composed of six steps arranged in the typical Given, When, and Then Gherkin syntax.

Converting Gherkin to Java
After analysts or testers create the Gherkin feature definitions, the files can be used to generate the Java unit tests that are executed via JUnit.  The first step is to add the cucumber-jvm libraries to your Java project.  Below are the Maven dependencies from my project.

  <dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-core</artifactId>
   <version>1.0.14</version>
  </dependency>
  <dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-junit</artifactId>
   <version>1.0.14</version>
  </dependency>
  <dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-java</artifactId>
   <version>1.0.14</version>
  </dependency>
  <dependency>
   <groupId>info.cukes</groupId>
   <artifactId>cucumber-groovy</artifactId>
   <version>1.0.14</version>
  </dependency>
  <dependency>
   <groupId>junit</groupId>
   <artifactId>junit</artifactId>
   <version>4.10</version>
  </dependency>

Next, a unit test runner class is created to hook Cucumber into JUnit.  Below is an example class.

package com.icfi.cuke;

import org.junit.runner.RunWith;

import cucumber.junit.Cucumber;

@RunWith(Cucumber.class)
public class Test {
}

The Test.java file executes as a JUnit test case using the Cucumber class.  When the class is executed as a JUnit test case, cucumber-jvm reads the feature files in the same package as the test, or as specified by the @Cucumber.Options annotation.  During execution, cucumber-jvm examines the behavioral specifications in the feature files and tries to match them to step definitions in the Java test files.
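For example, a runner that points cucumber-jvm at an explicit feature location might be configured as below.  This is a hypothetical sketch: the features and format values are illustrative, so check the @Cucumber.Options javadoc for your cucumber-jvm version before copying it.

package com.icfi.cuke;

import org.junit.runner.RunWith;

import cucumber.junit.Cucumber;

// Hypothetical configuration: load features from an explicit classpath
// location and use the "pretty" report format.
@RunWith(Cucumber.class)
@Cucumber.Options(features = "classpath:com/icfi/cuke", format = {"pretty"})
public class ConfiguredTest {
}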

When first executed, typically there is no matching Java code for the test steps.  At this point, cucumber-jvm outputs what it thinks the Java and Ruby method stubs should be, based on the steps defined in the feature files.  Below is the output (Ruby code removed) from the first run with no matching methods.

@Given("^User is logged in$")
public void User_is_logged_in() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@Given("^User is on edit profile page$")
public void User_is_on_edit_profile_page() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^User presses Edit Password button$")
public void User_presses_Edit_Password_button() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^User enters \"([^\"]*)\" for new password and repeats \"([^\"]*)\" for new password confirmation$")
public void User_enters_for_new_password_and_repeats_for_new_password_confirmation(String arg1, String arg2) throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^User presses \"([^\"]*)\"$")
public void User_presses(String arg1) throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@Then("^User should see \"([^\"]*)\"$")
public void User_should_see(String arg1) throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}
At this point the reader gets the idea that Cucumber matches step methods to steps within the feature files via Regular Expressions (RegEx).  In fact, the JUnit output indicates that the step definitions defined in the Gherkin feature file(s) have yet to be implemented.  Below is the JUnit output indicating that the step definitions were not executed.  This also points out that the JUnit execution with the Cucumber class uses the Features, Scenarios, and Steps defined in the feature files, and looks for their implementations in the Java unit test code.



Writing the Java Unit Tests
To get started with the Java implementation of the Cucumber unit tests, we only need to paste the recommended code from above into a step definition class.  Once pasted, we execute Test.java to pick up the Cucumber unit tests.  Only the first step executes and throws the PendingException, seen below.  These are the stubs that we now have to implement.

cucumber.runtime.PendingException: TODO: implement me
 at com.icfi.cuke.PasswordManagerTest.User_is_logged_in(PasswordManagerTest.java:13)
 at ?.Given User is logged in(com\icfi\cuke\PasswordManager.feature:3)

At this point it is important to realize that Cucumber does not enforce how we implement the unit tests.  We could write any number of assertions that would pass or fail.  Cucumber only tries to link the steps defined in the feature files, via RegEx matches, to annotated methods in the unit test classes.  Cucumber also enforces that the steps are executed in the order that they are defined.  The RegEx matches can be spread across multiple unit test classes as well.  Cucumber does not care where the tests actually are, just that they exist and run in the right order.  Cucumber will arrange the multiple Java unit test files in the correct order, if need be.
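To make the point concrete, here is a hypothetical implementation of one of the generated stubs.  The page object is a stand-in of my own (not from the project), and the annotation import location depends on your cucumber-jvm version; the method body is ordinary JUnit assertion code, which is all the enforcement Cucumber relies on.

package com.icfi.cuke;

import static org.junit.Assert.assertEquals;

import cucumber.annotation.en.Then;

public class PasswordManagerSteps {
    // Stand-in collaborator so the sketch is self-contained; a real test
    // would drive the actual edit profile page.
    private final FakeProfilePage profilePage = new FakeProfilePage();

    @Then("^User should see \"([^\"]*)\"$")
    public void User_should_see(String expectedMessage) throws Throwable {
        assertEquals(expectedMessage, profilePage.getConfirmationMessage());
    }

    private static class FakeProfilePage {
        String getConfirmationMessage() {
            return "Password changed";
        }
    }
}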

Matching Steps
The key to getting Gherkin steps to match Java test methods is using statements in your feature files that convey the correct RegEx in your Java annotations.  It is possible to use two different statements that actually result in the same RegEx annotation in Java.  For example, given the feature file below:

Feature: Password Manager
  Scenario: Change password
    Given User is logged in
    And User is on edit profile page
    When User presses Edit Password button
    And User enters "value" for new password and repeats "value" for new password confirmation
    And User presses "Validate password"
    And User presses "Change password"
    Then User should see "Password changed"
The steps containing the strings "Validate password" and "Change password" would both result in the same RegEx pattern: "^User presses \"([^\"]*)\"$"

However, Cucumber would only recommend this method once, for two reasons.  First, even though the Cucumber feature file has two similar statements with the same RegEx pattern result, only one corresponding Java method is needed.  Cucumber will execute the Java method twice to satisfy the defined steps.  The second reason is that if the Java method is actually written twice (same RegEx pattern with a different Java method name so it compiles), an exception is thrown by the cucumber-jvm runtime, as seen below.

cucumber.runtime.DuplicateStepDefinitionException: Duplicate step definitions in 
com.icfi.cuke.PasswordManagerTest.User_presses(String) in 
file:/C:/UserData/Workspaces/TOMCAT/Cucumber/target/test-classes/ and 
com.icfi.cuke.PasswordManagerTest.User_presses1(String) in 
file:/C:/UserData/Workspaces/TOMCAT/Cucumber/target/test-classes/

So, even if the defined feature steps cause the same RegEx pattern, Cucumber is smart enough to reuse the same Java method.
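This regex reuse is easy to verify with plain java.util.regex, outside of Cucumber.  The sketch below (the class and variable names are mine) matches both step lines against the single pattern and prints the captured argument that Cucumber would pass to the annotated method.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepMatchDemo {
    public static void main(String[] args) {
        // The single pattern Cucumber derives for both "User presses ..." steps
        Pattern step = Pattern.compile("^User presses \"([^\"]*)\"$");

        String[] lines = {
                "User presses \"Validate password\"",
                "User presses \"Change password\""
        };

        for (String line : lines) {
            Matcher m = step.matcher(line);
            if (m.matches()) {
                // The captured group is the String argument Cucumber
                // passes to the matching step method.
                System.out.println(m.group(1));
            }
        }
    }
}
```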

Implementing Tests - The Rigor-Gap Remains
Obviously, Cucumber cannot reach into the test case and force the Java developer to write proper unit tests.  In fact, since Cucumber is underpinned by JUnit, the same rigor gaps can occur.  Cucumber will check that defined steps are covered with tests by matching steps to Java method RegEx annotations, and JUnit will check for assertions and other exceptions thrown during execution.  However, this does not force the Java developer to actually write a valid test.  That rigor is still up to the Java developer and whoever reviews the code.  Unit test coverage of code can also be enforced with Sonar and other static code analysis tools.

Completing the Cycle
With Cucumber in your teams' toolboxes, they will be better equipped to complete the collaboration cycle and succeed with BDD.  Cucumber allows your BAs and testers to write the Gherkin feature files that help the Java developers stub out the Java behavioral specifications in the form of JUnit tests.  While the rigor gap still exists for actually implementing the unit tests, it can be partially closed with static code analysis.

Wednesday, September 19, 2012

PMI CVC PMP 2012 Fall Workshop - Communications and Risk Modules

I will be presenting on Sunday, October 7th, for the PMI Central Va Chapter during their Fall 2012 PMI Certification Workshop.  My topics will be Project Communications Management and Project Risk Management.

PMI CVC PMP 2012 Fall Workshop - Framework Talk

I will be presenting on September 22nd for the PMI Central Va Chapter during their Fall 2012 PMI Certification Workshop.  My topic will be the Project Management Framework.  My talk goes from 10:15 AM to 11:45 AM and touches on items found in sections 1 & 2, chapters 1, 2, & 3, of the PMBOK, version 4.

In my session I will be discussing these topics and more:


  • Projects, Portfolios, and Programs
  • Process Groups, Knowledge Areas, and Processes
  • PMO
  • Project Life Cycle vs. Product Life Cycle
  • Stakeholder Management
  • Organizational Structure

Ever wondered how PMI keeps the PMP exam current and relevant?  This year we have also added information on the PMI Role Delineation Study and the Crosswalk.

This workshop is a great way to come up to speed for the PMP exam as well as gain valuable study tips from fellow project managers that have already passed the exam.  The workshop is also a great opportunity to gain the PDUs needed to maintain existing PMP certifications.  Best of all, attendees receive copies of all the slides presented at the workshop as well as other resources to help them study for the exam.

Integration Module
I will also be supporting the Integration Module session that starts at 11:45 and continues after lunch to 14:00.

Thursday, September 6, 2012

Spring Provisional/Conditional Bean Loading (Part 2)

In Part 1 I explained two solutions for dynamically loading beans based on environments.  In this Part 2, I (quickly) extend the solution to handle conditional Bean Definition File imports.  Below is my new main.xml with a new custom tag, <profile:importIf>.  This new tag, along with the <profile:if> tag, will allow me to control individual bean and entire bean definition file loads via properties files.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:oxm="http://www.springframework.org/schema/oxm" xmlns:lang="http://www.springframework.org/schema/lang"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:profile="http://icfi.com/springbeans/profile"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
      http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
         http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
         http://icfi.com/springbeans/profile http://icfi.com/springbeans/profile/profile.xsd">

 <import resource="beans.xml" />
 <import resource="beans2.xml" />
 <profile:if test="${Spring.ENV=='TEST'}" src="config.properties">
  <bean id="testBean" class="com.icfi.spring.init.beans.TestBean"
   name="ibean">
   <property name="valueOne" value="This is TEST." />
  </bean>
 </profile:if>

 <profile:if test="${Spring.ENV=='PROD'}" src="config">
  <bean id="prodBean" class="com.icfi.spring.init.beans.ProdBean"
   name="ibean">
   <property name="valueOne" value="This is PROD." />
  </bean>
 </profile:if>

 <profile:importIf test="${Spring.ENV=='DEV'}" src="config.properties"
  resource="context/DEV-beans.xml" />
</beans>
To make this work I had to modify some additional artifacts: profile.xsd, ProfileBeanNamespaceHandler.java, and ProfileBeanDefinitionParser.java.  The updated artifacts are seen below.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<xsd:schema xmlns="http://icfi.com/springbeans/profile"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:beans="http://www.springframework.org/schema/beans"
 targetNamespace="http://icfi.com/springbeans/profile"
 elementFormDefault="qualified" attributeFormDefault="unqualified">

 <xsd:element name="if">
  <xsd:complexType>
   <xsd:sequence>
    <xsd:any minOccurs="0" />
   </xsd:sequence>
   <xsd:attribute name="test" type="xsd:string" use="required" />
   <xsd:attribute name="src" type="xsd:string" use="required" />
  </xsd:complexType>
 </xsd:element>
 
 <xsd:element name="importIf">
  <xsd:complexType>
   <xsd:sequence>
    <xsd:any minOccurs="0" />
   </xsd:sequence>
   <xsd:attribute name="test" type="xsd:string" use="required" />
   <xsd:attribute name="src" type="xsd:string" use="required" />
   <xsd:attribute name="resource" type="xsd:string" use="required" />
  </xsd:complexType>
 </xsd:element>

</xsd:schema>

package com.icfi.springbeans.profile;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.xml.NamespaceHandlerSupport;

public class ProfileBeanNamespaceHandler extends NamespaceHandlerSupport {
 private static Logger log = LoggerFactory
 .getLogger(ProfileBeanNamespaceHandler.class);
 
 public void init() {
  log.debug(this.getClass().getSimpleName()+"::initStart");
  super.registerBeanDefinitionParser("if",
    new ProfileBeanDefinitionParser());
  super.registerBeanDefinitionParser("importIf",
    new ProfileBeanDefinitionParser());
  log.debug(this.getClass().getSimpleName()+"::initEnd");
 }
}

package com.icfi.springbeans.profile;

import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.ResourceBundle;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanDefinitionHolder;
import org.springframework.beans.factory.support.BeanDefinitionReaderUtils;
import org.springframework.beans.factory.xml.BeanDefinitionParser;
import org.springframework.beans.factory.xml.BeanDefinitionParserDelegate;
import org.springframework.beans.factory.xml.ParserContext;
import org.springframework.util.xml.DomUtils;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ProfileBeanDefinitionParser implements BeanDefinitionParser {
 private static Logger log = LoggerFactory
   .getLogger(ProfileBeanDefinitionParser.class);

 private ResourceBundle bundle;

 /** SpEL prefix: "${" */
 public static final String PREFIX = "${";

 /** SpEL suffix: "}" */
 public static final String SUFFIX = "}";

 /**
  * Parse the "if" element, check for the required "test" and "src"
  * attributes.
  */
 public BeanDefinition parse(Element element, ParserContext parserContext) {
  try {
   if (DomUtils.nodeNameEquals(element, "if")
     || DomUtils.nodeNameEquals(element, "importIf")) {
    String test = element.getAttribute("test");
    String src = element.getAttribute("src");

    if (StringUtils.isNotEmpty(src)) {
     if (src.indexOf(".") > 0) {
      src = src.substring(0, src.indexOf("."));
     }

     bundle = ResourceBundle.getBundle(src);
    } else {
     throw new IllegalArgumentException(
       "src attribute not found.");
    }

    if (StringUtils.isNotEmpty(test)) {
     Map<String, String> map = this.getExpressionMap(test);

     String left = this.bundle.getString(map.get("left"));
     String right = map.get("right");

     if (left != null && right != null && left.equals(right)) {
      if (DomUtils.nodeNameEquals(element, "if")) {
       Element beanElement = DomUtils
         .getChildElementByTagName(element, "bean");
       return registerBean(beanElement, parserContext);
      } else if (DomUtils.nodeNameEquals(element, "importIf")) {
       String resource = element.getAttribute("resource");

       InputStream is = parserContext.getReaderContext()
         .getResourceLoader().getResource(resource)
         .getInputStream();

       Document doc = this.parse(is);
       NodeList elements = doc
         .getElementsByTagName("bean");
       for (int x = 0; x < elements.getLength(); x++) {
        Element bean = (Element) elements.item(x);
        this.registerBean(bean, parserContext);
       }
      }
     }
    } else {
     throw new IllegalArgumentException(
       "test attribute not found.");
    }
   }
  } catch (Exception e) {
   log.error(e.getMessage());
  }

  return null;
 }

 private Map<String, String> getExpressionMap(String value) {
  Map<String, String> map = new HashMap<String, String>();

  if (StringUtils.isEmpty(value)) {
   return null;
  }

  String entire = value.substring(PREFIX.length(), value.length()
    - SUFFIX.length());

  String left = entire.substring(0, entire.indexOf("=="));

  String right = entire.substring(entire.indexOf('\'') + 1,
    entire.lastIndexOf('\''));

  map.put("left", left);
  map.put("right", right);

  return map;
 }

 /*
  * Register Bean
  * 
  * @param element
  * 
  * @param parserContext
  * 
  * @return
  */
 private BeanDefinition registerBean(Element element,
   ParserContext parserContext) {
  BeanDefinitionParserDelegate delegate = parserContext.getDelegate();
  BeanDefinitionHolder holder = delegate
    .parseBeanDefinitionElement(element);
  BeanDefinitionReaderUtils.registerBeanDefinition(holder,
    parserContext.getRegistry());

  return holder.getBeanDefinition();
 }

 /*
  * JAXP Parser
  * 
  * @param is
  * 
  * @return
  * 
  * @throws Exception
  */
 private Document parse(InputStream is) throws Exception {
  DocumentBuilderFactory dbfactory = DocumentBuilderFactory.newInstance();
  DocumentBuilder builder = dbfactory.newDocumentBuilder();
  Document doc = builder.parse(is);
  return doc;
 }
}
This new solution will load individual beans based on conditions defined in Bean Definition Files, and will now import entire Bean Definition Files based on those same declared conditions. Of course the same caveats apply. Developers must manage bean ID and NAME collisions when using multiple conditions and imports.

Spring Provisional/Conditional Bean Loading (Part 1)

In Spring 3.1 (2011), profiles were introduced.  These Bean Definition Profiles allow Spring developers to set up different profiles for loading different beans in different environments.  I know several developers who are moving to 3.1 just for that feature.

However, what does one do if one needs to load different beans for different environments, but cannot immediately upgrade to Spring 3.1?  In this blog entry I will cover that scenario with two solutions from long ago that still work today in Spring 3.0.x.

Solution #1:

Far and away the easiest solution is to use config file properties, the Spring PropertyPlaceholderConfigurer, and Spring Expression Language (SpEL).  Below is an XML snippet from a bean definition file.  The code uses SpEL to get at the Spring.ENV property found in config.properties, which is referenced as the Spring container starts to load.
…<bean id="icfi.propertyConfigurer"
  class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="locations">
   <list>
    <value>classpath:config.properties</value>
   </list>
  </property>  
  <property name="ignoreUnresolvablePlaceholders" value="false" />
 </bean>
…
 <import resource="beans.xml" />
 <import resource="beans2.xml" />
 <import resource="${Spring.ENV}-beans.xml" />…
As long as the property is resolved before the Spring Container starts to load bean definitions, this will work to load files whose names match the resource arguments.  This solution has issues, not the least of which is that properties must be used and coordinated to match the names of the related bean definition files.

System properties can also be used in lieu of the PropertyPlaceholderConfigurer.
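For instance (a minimal sketch; the property name matches the one used throughout this post), setting Spring.ENV as a JVM system property has the same effect, since the placeholder can be resolved from system properties:

```java
public class SystemPropertyDemo {
    public static void main(String[] args) {
        // Equivalent to launching the JVM with -DSpring.ENV=DEV
        System.setProperty("Spring.ENV", "DEV");

        // Spring would resolve <import resource="${Spring.ENV}-beans.xml"/>
        // to this file name as the container starts up.
        String resolved = System.getProperty("Spring.ENV") + "-beans.xml";
        System.out.println(resolved); // DEV-beans.xml
    }
}
```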

Solution #2:

This next solution is way more involved, requiring custom extensions to the Spring Container API, but it is driven by a configuration properties file instead of a system property.  To understand this approach, we start with what we would like to see in the bean definition file.  Below is the <profile:if> custom tag that will test whether certain beans are to be loaded.

<profile:if test="${Spring.ENV=='DEV'}" src="config.properties">
 <bean id="devBean" class="com.icfi.spring.init.beans.DevBean" name="ibean">
  <property name="valueOne" value="This is DEV." />
 </bean>
</profile:if>
In this example, the <profile:if> tag tests to see if the Spring.ENV property has the required value of "DEV" to load this bean.  The Spring.ENV property should be found in the config.properties resource bundle that is declared in the src attribute of the <profile:if> tag.  If the test passes (Spring.ENV == 'DEV') then the devBean will be loaded.
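Before looking at the plumbing, it helps to see what evaluating that test expression boils down to.  The standalone sketch below (my own demo class, not part of the solution) mimics how the parser splits "${Spring.ENV=='DEV'}" into the property key to look up and the literal to compare against.

```java
public class ExpressionSplitDemo {
    public static void main(String[] args) {
        String test = "${Spring.ENV=='DEV'}";

        // Strip the "${" prefix and "}" suffix
        String entire = test.substring(2, test.length() - 1); // Spring.ENV=='DEV'

        // Left of "==" is the key looked up in the config.properties bundle
        String left = entire.substring(0, entire.indexOf("=="));

        // The single-quoted literal on the right is the expected value
        String right = entire.substring(entire.indexOf('\'') + 1,
                entire.lastIndexOf('\''));

        System.out.println(left);  // Spring.ENV
        System.out.println(right); // DEV
    }
}
```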

The plumbing for this solution extends the Spring Container API via the Extensible XML Authoring API with Java and XML Schema (XSD), along with two additional Spring configuration files.  It is rather complex, but I will try to simplify it here.

The Extensible XML Authoring API (EXAA for short) has been around since Spring 2.0.  However, it never really got the visibility that other Spring features enjoyed.  With EXAA, there are four steps to extending the API:

  1. Authoring XML schemas
  2. Coding Namespace handlers
  3. Coding BeanDefinitionParser extensions
  4. Registering the XSD and code artifacts with the Spring Container


I will start with authoring the schema.  Below is the XSD that defines the new custom tags we want to use.  As a point of reference, I built this solution in Eclipse Helios SR2 using Maven.  I placed the new profile.xsd into the directory shown in the image below.


The contents of the schema are seen below.  In this XSD I defined the target namespace (http://icfi.com/springbeans/profile) and the custom tag (if) with required attributes, test and src.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<xsd:schema xmlns="http://icfi.com/springbeans/profile"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:beans="http://www.springframework.org/schema/beans"
 targetNamespace="http://icfi.com/springbeans/profile"
 elementFormDefault="qualified" attributeFormDefault="unqualified">

 <xsd:element name="if">
  <xsd:complexType>
   <xsd:sequence>
    <xsd:any minOccurs="0" />
   </xsd:sequence>
   <xsd:attribute name="test" type="xsd:string" use="required" />
   <xsd:attribute name="src" type="xsd:string" use="required" />
  </xsd:complexType>
 </xsd:element>

</xsd:schema>
Once the XSD was done, I needed to register the schema using the appropriate properties file, spring.schemas, in the META-INF directory.
http\://icfi.com/springbeans/profile/profile.xsd=com/icfi/springbeans/profile/profile.xsd
Next, I needed to code the handler and parser. The handler, a simple class as it turns out, is used by the Spring Container to register the new custom Bean Definition Parser. My handler is below.
package com.icfi.springbeans.profile;

import org.springframework.beans.factory.xml.NamespaceHandlerSupport;

public class ProfileBeanNamespaceHandler extends NamespaceHandlerSupport {
 public void init() {
  super.registerBeanDefinitionParser("if",
    new ProfileBeanDefinitionParser());
 }
}
Next I needed to register the handler. For this I created the appropriate properties file, spring.handlers, in the META-INF directory.
http\://icfi.com/springbeans/profile=com.icfi.springbeans.profile.ProfileBeanNamespaceHandler
Below is the location in my Maven layout for both registration properties files mentioned so far.


The ProfileBeanDefinitionParser is referenced in the handler.  This parser does all the heavy lifting of parsing the custom tag, verifying the attributes, loading the defined resource bundle (properties file), testing the condition for bean load, parsing the bean definition, and then registering the bean with Spring.  Below is the custom Bean Definition Parser that I wrote.
package com.icfi.springbeans.profile;

import java.util.HashMap;
import java.util.Map;
import java.util.ResourceBundle;

import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanDefinitionHolder;
import org.springframework.beans.factory.support.BeanDefinitionReaderUtils;
import org.springframework.beans.factory.xml.BeanDefinitionParser;
import org.springframework.beans.factory.xml.BeanDefinitionParserDelegate;
import org.springframework.beans.factory.xml.ParserContext;
import org.springframework.util.xml.DomUtils;
import org.w3c.dom.Element;

public class ProfileBeanDefinitionParser implements BeanDefinitionParser {
 private static Logger log = LoggerFactory
   .getLogger(ProfileBeanDefinitionParser.class);

 private ResourceBundle bundle;

 /** SpEL prefix: "${" */
 public static final String PREFIX = "${";

 /** SpEL suffix: "}" */
 public static final String SUFFIX = "}";

 /**
  * Parse the "if" element, check for the required "test" and "src" attributes.
  */
 public BeanDefinition parse(Element element, ParserContext parserContext) {
  try {
   if (DomUtils.nodeNameEquals(element, "if")) {
    String test = element.getAttribute("test");
    String src = element.getAttribute("src");

    if (StringUtils.isNotEmpty(src)) {
     if (src.indexOf(".") > 0) {
      src = src.substring(0, src.indexOf("."));
     }

     bundle = ResourceBundle.getBundle(src);
    } else {
     throw new IllegalArgumentException(
       "src attribute not found.");
    }

    if (StringUtils.isNotEmpty(test)) {
     Map<String, String> map = this.getExpressionMap(test);
  
     String left = this.bundle.getString(map.get("left"));
     String right = map.get("right");

     if (left != null && right != null && left.equals(right)) {
      Element beanElement = DomUtils
        .getChildElementByTagName(element, "bean");
      return parseRegisterBean(beanElement, parserContext);
     }
    } else {
     throw new IllegalArgumentException(
       "test attribute not found.");
    }
   }
  } catch (Exception e) {
   log.error(e.getMessage());
  }

  return null;
 }

 private Map<String, String> getExpressionMap(String value) {
  Map<String, String> map = new HashMap<String, String>();

  if (StringUtils.isEmpty(value)) {
   return null;
  }

  String entire = value.substring(PREFIX.length(),
    value.length() - SUFFIX.length());

  String left = entire.substring(0, entire.indexOf("=="));

  String right = entire.substring(entire.indexOf('\'') + 1,
    entire.lastIndexOf('\''));

  map.put("left", left);
  map.put("right", right);

  return map;
 }

 private BeanDefinition parseRegisterBean(Element element,
   ParserContext parserContext) {
  BeanDefinitionParserDelegate delegate = parserContext.getDelegate();
  BeanDefinitionHolder holder = delegate
    .parseBeanDefinitionElement(element);
  BeanDefinitionReaderUtils.registerBeanDefinition(holder,
    parserContext.getRegistry());

  return holder.getBeanDefinition();
 }
}
The config.properties file contains the Spring.ENV property that will be used to test the condition defined in the tag, test attribute. This file is found at the root of the src/main/resources Maven layout.
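For reference, the file's assumed contents are a single entry (the value shown would drive the DEV condition):

# config.properties -- assumed single-entry content
Spring.ENV=DEV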

Below is a test class, SpringInitMotivator, that I used to exercise this solution.  Also seen below is the main.xml used to configure Spring in my application.  The SpringInitMotivator uses a helper class, SpringBeanFactory, to get at beans.  For this demo, I use the IBeans interface, implemented by the DevBean and ProdBean classes.  I load the beans via the name attribute and not the id, as multiple beans cannot have the same ID, and I wanted to use a common name for both bean load conditions.
package com.icfi.spring;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;

import com.icfi.spring.init.beans.IBeans;

public class SpringInitMotivator {
 private static Logger log = LoggerFactory
   .getLogger(SpringInitMotivator.class);

 public static void main(String[] args) {
  ApplicationContext ctx = new GenericXmlApplicationContext(
    "context/main.xml"); // Spring 3.0

  IBeans ibean = (IBeans) SpringBeanFactory.getBean("ibean");
  log.info(ibean.getValueOne());
 }
}

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
 xmlns:aop="http://www.springframework.org/schema/aop" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:oxm="http://www.springframework.org/schema/oxm" xmlns:lang="http://www.springframework.org/schema/lang"
 xmlns:context="http://www.springframework.org/schema/context"
 xmlns:profile="http://icfi.com/springbeans/profile"
 xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
      http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
         http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
         http://icfi.com/springbeans/profile http://icfi.com/springbeans/profile/profile.xsd">
 <import resource="beans.xml" />
 <import resource="beans2.xml" />
 <profile:if test="${Spring.ENV=='DEV'}" src="config.properties">
  <!-- <import resource="development-beans.xml" /> -->
  <bean id="devBean" class="com.icfi.spring.init.beans.DevBean" name="ibean">
   <property name="valueOne" value="This is DEV." />
  </bean>
 </profile:if>

 <profile:if test="${Spring.ENV=='PROD'}" src="config.properties">
  <!-- <import resource="production-beans.xml" /> -->
  <bean id="prodBean" class="com.icfi.spring.init.beans.ProdBean" name="ibean">
   <property name="valueOne" value="This is PROD." />
  </bean>
 </profile:if>
</beans>

Wednesday, August 29, 2012

Storing Documents in MarkLogic via XCC

Integrating Java applications with MarkLogic involves using the MarkLogic XCC (XML Contentbase Connector), a set of APIs with support for Java, .NET, etc.  XCC connects to the XML databases through an XDBC server embedded in the MarkLogic server.

After the XDBC server is created in MarkLogic, you can test connectivity via a simple Hello World call, seen below.  Note that I use the admin/admin credentials that I set up when I installed MarkLogic.  In truth, MarkLogic has a very granular security scheme that can be used to control access and privileges.  SSL should also be used.  The Hello World example is from the XCC Developers Guide.
package com.icfi.marklogic;

import java.net.URI;
import java.net.URISyntaxException;

import com.marklogic.xcc.ContentSource;
import com.marklogic.xcc.ContentSourceFactory;
import com.marklogic.xcc.Request;
import com.marklogic.xcc.ResultSequence;
import com.marklogic.xcc.Session;
import com.marklogic.xcc.exceptions.RequestException;
import com.marklogic.xcc.exceptions.XccConfigException;

public class HelloWorld {
 public static void main(String[] args) throws URISyntaxException,
   XccConfigException, RequestException {

  URI uri = new URI("xcc://admin:admin@localhost:8050/Documents");
  ContentSource contentSource = ContentSourceFactory.newContentSource(uri);

  Session session = contentSource.newSession();
  Request request = session.newAdhocQuery("\"Hello World\"");
  ResultSequence rs = session.submitRequest(request);
  System.out.println(rs.asString());
  session.close();
 }
}

Once connectivity has been tested, you are ready to start storing documents.  Another example from the XCC Guide (customized for my use) can be seen below.  In this example, I again connect to the XDBC server that I created and get a session object.  It is important to state here that connection pooling is done automatically for you by XCC.  The API also supports JTA.


package com.icfi.marklogic;

import java.net.URI;
import java.net.URISyntaxException;

import com.icfi.marklogic.content.XmlContent;
import com.marklogic.xcc.ContentSource;
import com.marklogic.xcc.ContentSourceFactory;
import com.marklogic.xcc.Session;
import com.marklogic.xcc.exceptions.RequestException;
import com.marklogic.xcc.exceptions.RetryableXQueryException;
import com.marklogic.xcc.exceptions.XccConfigException;

public class ContentAdder {
 public static final int MAX_RETRY_ATTEMPTS = 5;
 public static final int RETRY_WAIT_TIME = 1000;

 public static void main(String[] args) throws URISyntaxException,
   XccConfigException, RequestException {
  URI uri = new URI("xcc://admin:admin@localhost:8050/Documents");
  ContentSource contentSource = ContentSourceFactory
    .newContentSource(uri);
  Session session = contentSource.newSession();
  session.setTransactionMode(Session.TransactionMode.UPDATE);

  // Re-try logic for a multi-statement transaction
  for (int i = 0; i < MAX_RETRY_ATTEMPTS; i++) {
   try {
    session.submitRequest(session
      .newAdhocQuery("xdmp:document-insert('/docs/catalog.xml', "
        + XmlContent.CATALOG + ")"));
    session.submitRequest(session
      .newAdhocQuery("xdmp:document-insert('/docs/bookstore.xml', "
        + XmlContent.BOOKSTORE + ")"));
    session.commit();
    break;
   } catch (RetryableXQueryException e) {
    try {
     Thread.sleep(RETRY_WAIT_TIME);
    } catch (InterruptedException ie) {
     // Ignore
    }
   }
  }
  session.close();
 }
}

In this example I also use the recommended approach to retrying operations against the XDBC server.  The code that does the "heavy lifting" to store the documents is seen below.  In this code, the XCC API uses the session object to make a request to the XDBC server with a new ad-hoc query that calls xdmp:document-insert, a built-in MarkLogic XQuery function.  In its simplest form, the document-insert function takes a unique document URI and the XML document content.  In this example, the XML content is provided by a Groovy GString constant in the XmlContent Groovy class.  I use Groovy string constants because GStrings spare me from writing all that nasty java.lang.String concatenation.
session.submitRequest(session
      .newAdhocQuery("xdmp:document-insert('/docs/catalog.xml', "
        + XmlContent.CATALOG + ")"));

To verify that the docs were stored, I will go out to the MarkLogic Query Console (http://localhost:8000/qconsole/).  In the console, I can run  XQuery queries to verify that I stored the documents in the database.  Note:  When MarkLogic installs, it creates several databases, and when you create an XDBC server, you must choose a database to connect to.  I chose the "Documents" database, but I could have created a new one for this purpose.  Below is a screenshot of the Query Console.  I clicked on the "Explore" button to view a list of all the documents in this database.
If I wanted to view the contents of a document, I could run an XQuery as seen below, or I could also simply click on the document in the list.
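Such a query is just the XQuery doc() function applied to the document URI from the earlier insert, along these lines:

```xquery
(: Fetch a stored document by its unique URI :)
doc("/docs/catalog.xml")
```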
The value in the XQuery doc function is the unique URI for the document in the MarkLogic database.  This was a simple example of storing documents in MarkLogic.  In reality, considerable thought should be given to creating the proper structures (directories, collections, etc.) that would be used to house and organize documents.  Organizing documents into directories and collections makes them easier to handle en masse if that requirement exists.

Another important point is that these docs were already XML.  Going forward, I will be serializing Java objects into XML via XStream.  Before I can store Java objects as serialized XML, I need to map important attributes of my model objects to the MarkLogic container model.  To do this I wrote a custom Java Annotation, seen below.
package com.icfi.marklogic;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Document {
 String documentUriPrefix() default "";
 String documentUriSuffix() default "";
 String collections() default "";
 String directory() default "";
 String properties() default "";
}
The Employee.java model class seen below uses the Document annotation to define the MarkLogic container specific semantics that would be used when the document representing the Java object is stored in MarkLogic.
package com.icfi.model;

import java.io.Serializable;
import java.text.SimpleDateFormat;
import java.util.Date;

import com.icfi.framework.Strings;
import com.icfi.marklogic.Document;

/**
 * Employee model object.
 * 
 * @author jimmyray
 * @version 1.0
 */
@Document(documentUriSuffix = "/employee.xml", 
  collections = "http://employees.none.com", directory = "/Employees/", properties="NEW")
public class Employee extends Person implements Serializable {
 private static final long serialVersionUID = 2523764855390968707L;
 private String id;
 private Address address;
 private String employeeId;
 private Date hireDate;
 private Department department;
 private String title;
 private int salary;

 public String getId() {
  return id;
 }

 public void setId(String id) {
  this.id = id;
 }
...

Seen below, the EmployeeServiceImpl processes the annotations on the Employee class to get at the metadata needed to process the documents in MarkLogic.
...
 private void processAnnotations(Employee employee)
   throws ClassNotFoundException {

  this.documentMap = new HashMap<String, String>();

  Document document = employee.getClass().getAnnotation(Document.class);

  String uri = document.directory() + employee.getId()
    + document.documentUriSuffix();

  String collections = document.collections();

  String properties = document.properties();

  this.documentMap.put(URI_KEY, uri);

  if (null != collections && !collections.equals("")) {
   this.documentMap.put(COLLECTIONS_KEY, collections);
  }

  if (null != properties && !properties.equals("")) {
   this.documentMap.put(PROPERTIES_KEY, properties);
  }

 }...
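The URI-building logic in processAnnotations() can be demonstrated with a self-contained sketch; the annotation and Employee class here are trimmed-down, hypothetical stand-ins for the real classes:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // Trimmed-down copy of the Document annotation from above
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Document {
        String documentUriSuffix() default "";
        String directory() default "";
    }

    @Document(directory = "/Employees/", documentUriSuffix = "/employee.xml")
    static class Employee {
        String getId() { return "1001"; }
    }

    // Same URI construction as processAnnotations(): directory + id + suffix
    static String buildUri() {
        Employee employee = new Employee();
        Document document = employee.getClass().getAnnotation(Document.class);
        return document.directory() + employee.getId()
                + document.documentUriSuffix();
    }

    public static void main(String[] args) {
        System.out.println(buildUri()); // prints /Employees/1001/employee.xml
    }
}
```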



Below is a JUnit test that exercises the EmployeeServiceImpl class.  This test loads employees and stores them in the MarkLogic database using the employee service.
package com.icfi.marklogic;

import java.util.List;

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;

import com.icfi.model.Employee;
import com.icfi.services.EmployeeService;

public class EmployeeServiceTest {
 private static Logger log = LoggerFactory
   .getLogger(EmployeeServiceTest.class);

 @Test
 public void testEmployeeService() throws Exception {
  ApplicationContext ctx = new GenericXmlApplicationContext(
    "context/main.xml");

  EmployeeService employeeService = (EmployeeService) ctx
    .getBean("employeeService");

  List<Employee> employees = employeeService.buildEmployees();

  for (Employee employee : employees) {
   System.out.println(employee);
  }

  employeeService.persistEmployees(employees);
  
  //employeeService.removeEmployees(employees);
 }
}
The persistEmployee() method (seen below) of the EmployeeServiceImpl class persists employee documents into MarkLogic via the MarkLogicDao class.
public void persistEmployee(Employee employee) {
  try {
   this.processAnnotations(employee);
   dao.storeDocument(this.serialize(employee), this.documentMap);
  } catch (XccConfigException xce) {
   log.error(Strings.getStackTraceAsString(xce));
  } catch (RequestException re) {
   log.error(Strings.getStackTraceAsString(re));
  } catch (URISyntaxException use) {
   log.error(Strings.getStackTraceAsString(use));
  } catch (ClassNotFoundException cnfe) {
   log.error(Strings.getStackTraceAsString(cnfe));
  } catch (TransformerConfigurationException tce) {
   log.error(Strings.getStackTraceAsString(tce));
  } catch (TransformerFactoryConfigurationError tfce) {
   log.error(Strings.getStackTraceAsString(tfce));
  }
 }
This method performs an inline serialization using XStream.
private String serialize(Employee employee)
   throws TransformerConfigurationException,
   TransformerFactoryConfigurationError {
  XStream xstream = new XStream(new DomDriver());
  String xml = xstream.toXML(employee);

  return xml;
 }
The MarkLogicDao class does the heavy lifting  and interfaces with the MarkLogic database.  Its methods use the documentMap to get access to the metadata attached to the Employee objects via the Document annotation.
package com.icfi.marklogic;

import java.net.URI;
import java.net.URISyntaxException;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.icfi.services.EmployeeService;
import com.marklogic.xcc.ContentSource;
import com.marklogic.xcc.ContentSourceFactory;
import com.marklogic.xcc.Session;
import com.marklogic.xcc.exceptions.RequestException;
import com.marklogic.xcc.exceptions.RetryableXQueryException;
import com.marklogic.xcc.exceptions.XccConfigException;

/**
 * DAO class to abstract the database layer from the Java application.
 * 
 * @author jimmyray
 * @version 1.0
 */
public class MarkLogicDao {
 private static Logger log = LoggerFactory.getLogger(MarkLogicDao.class);

 public static final int MAX_RETRY_ATTEMPTS = 5;
 public static final int RETRY_WAIT_TIME = 1000; // milliseconds

 private Session session;

 /**
  * Store multiple XML documents
  * 
  * @param data
  * @param map
  * @throws XccConfigException
  * @throws RequestException
  * @throws URISyntaxException
  */
 public void storeDocuments(String[] data, Map<String, String> map)
   throws XccConfigException, RequestException, URISyntaxException {
  for (String doc : data) {
   this.storeDocument(doc, map);
  }
 }

 /**
  * Store a single XML document.
  * 
  * @param data
  * @param map
  * @throws URISyntaxException
  * @throws XccConfigException
  * @throws RequestException
  */
 public void storeDocument(String data, Map<String, String> map)
   throws URISyntaxException, XccConfigException, RequestException {

  log.debug("Storing " + map.get(EmployeeService.URI_KEY));

  this.buildSession();

  session.setTransactionMode(Session.TransactionMode.AUTO);

  // Re-try logic for a multi-statement transaction
  for (int i = 0; i < MAX_RETRY_ATTEMPTS; i++) {
   try {
    log.debug("request 1");
    session.submitRequest(session
      .newAdhocQuery("xdmp:document-insert('"
        + map.get(EmployeeService.URI_KEY) + "', "
        + data + ")"));

    if (map.containsKey(EmployeeService.COLLECTIONS_KEY)) {
     log.debug("request 2");
     session.submitRequest(session
       .newAdhocQuery("xdmp:document-set-collections('"
         + map.get(EmployeeService.URI_KEY) + "', '"
         + map.get(EmployeeService.COLLECTIONS_KEY)
         + "')"));
    }

    if (map.containsKey(EmployeeService.PROPERTIES_KEY)) {
     log.debug("request 3");
     session.submitRequest(session
       .newAdhocQuery("xdmp:document-set-properties('"
         + map.get(EmployeeService.URI_KEY) + "', "
         + map.get(EmployeeService.PROPERTIES_KEY)
         + ")"));
    }

    //session.commit();
    break;
   } catch (RetryableXQueryException e) {
    try {
     Thread.sleep(RETRY_WAIT_TIME);
    } catch (InterruptedException ie) {
     Thread.currentThread().interrupt();
    }
   }
  }
  session.close();
 }

 /**
  * Delete an XML document
  * 
  * @param docUri
  * @throws URISyntaxException
  * @throws XccConfigException
  * @throws RequestException
  */
 public void deleteDocument(String docUri) throws URISyntaxException,
   XccConfigException, RequestException {
  this.buildSession();

  // session.setTransactionMode(Session.TransactionMode.UPDATE);

  // Re-try logic for a multi-statement transaction
  for (int i = 0; i < MAX_RETRY_ATTEMPTS; i++) {
   try {
    session.submitRequest(session
      .newAdhocQuery("xdmp:document-delete('" + docUri + "')"));
    // session.commit();
    break;
   } catch (RetryableXQueryException e) {
    try {
     Thread.sleep(RETRY_WAIT_TIME);
    } catch (InterruptedException ie) {
     Thread.currentThread().interrupt();
    }
   }
  }
  session.close();
 }

 /*
  * Build the MarkLogic session needed for other operations.
  * 
  * @throws URISyntaxException
  * 
  * @throws XccConfigException
  */
 private void buildSession() throws URISyntaxException, XccConfigException {
  if (null != session && !session.isClosed()) {
   return;
  }

  URI uri = new URI("xcc://admin:admin@localhost:8050/Documents");
  ContentSource contentSource = ContentSourceFactory
    .newContentSource(uri);
  this.session = contentSource.newSession();
 }
}
The purpose of the MarkLogicDao is to abstract the access layer and prototype the XDMP calls to the MarkLogic XCC API.  The screenshot below shows the documents loaded by their unique URIs and their collections.
By clicking on the (properties) link, you can access the document properties metadata.  These metadata are helpful when you want to process documents and keep track of which ones have been processed, or other in-process statuses.  Below are the properties for one of the employee docs.


  NEW
  2012-09-05T15:24:28-04:00

Going forward I will discuss the power behind XPath and XQuery embedded in MarkLogic.

Tuesday, August 28, 2012

MarkLogic - NoSQL for XML

Lately my application designs have included more solutions for in-flight "data durability".  Like durability in ACID, users need their data stored between when they start a process and when they finish it, within an application.  And these data are of course contained in the application domain model object graph.

One of the ideas I have been working on is to swap out the traditional object model and bind my forms to XML with JAXB.  In fact, modeling forms and captured data as XML documents, which are translated and exposed as Java objects when needed, increases flexibility in my designs.  This approach makes it easier to stuff the entire XML blob into a database, and recall it later to re-inflate the objects for the users' sessions.

For this approach to work, I need a way to quickly serialize and deserialize my Java objects to and from XML (XStream), and I need a fully functional XML data store (MarkLogic).  I chose MarkLogic over an RDBMS solution because I wanted access to the XML content beyond storing/retrieving XML BLOB/CLOB fields.  Yes, Oracle has XML DB, but I wanted the more schema-less approach offered by NoSQL.

MarkLogic is a great solution for this approach, as it offers the performance needed for document storage and retrieval en masse, coupled with the flexibility to search, read, and modify XML document internals (documents and nodes) with XPath and XQuery.

In future blogs, I will detail my integration with MarkLogic via the available Java APIs, and the use of MarkLogic server tools such as XPath and XQuery, as well as the MarkLogic XCC API.

Tuesday, August 7, 2012

Speaking at JavaMUG in Dallas, TX on 8/8/12

I am speaking in Dallas, TX at the JavaMUG on 8/8/12.  My topic is MongoDB integration with Spring Data.  The abstract and agenda are below.

Abstract
MongoDB (short for Humongous Database) is a document-oriented NoSQL database, written in C++, that stores data in JSON-like (BSON) documents with dynamic schemas.  MongoDB is emerging as one of the leaders in the document-oriented NoSQL space.  Spring Data is one of the latest offerings from the SpringSource community, and is focused on improving data access and persistence.  Leveraging the venerable power of the Spring Framework, Spring Data purports to deliver a “familiar and consistent” programming model underpinned by the Spring services and its IoC container.

This session will explore Spring Data integration with MongoDB while covering the following key points:  Intro to MongoDB, Intro to MongoVIEW, Intro to Spring Data, Spring Data and MongoDB configuration, Spring Data templates, repositories, Query Method Conventions, Custom queries (custom finders), Customizing repositories, Metadata Mapping, Indexes, and Database References.

Agenda
Quick introduction to NoSQL and MongoDB
Configuration
MongoView
Introduction to Spring Data and MongoDB support
Spring Data and MongoDB configuration
Templates
Repositories
Query Method Conventions
Custom Finders
Customizing Repositories
Metadata Mapping (including nested docs and DBRef)
Aggregation Functions
Advanced Querying
GridFS File Storage
Indexes

Wednesday, July 11, 2012

Do You Trust Your Mechanic?

Most of us have seen commercials about oils or other auto parts or tools, where the theme of the commercial is the fact that most mechanics choose the advertised brand.  Advertisers expect us to choose these brands because we usually trust our mechanics.  Why waste time using stuff that mechanics don't use or approve?  In fact, when we consciously choose not to heed our mechanics' advice, it usually has something to do with price.  Perhaps this will happen with internet browsers someday.

Some organizations are hopelessly locked into supporting older web browser technology, like IE7.  For whatever reason (price?), they have stayed on older Windows and IE versions.  For those of you not keeping up, IE7 came out in 2006.  So, these organizations attempt to use 2012 ideas to design and implement 2012 interfaces, and make them regress and work with 2006 technology.  It's like hooking your 8-track deck to your Bose Sound System.

For the uninitiated, there are at least two camps when it comes to web browser market share:  mechanics and customers.  Mechanics are the web developers, graphics artists, and programmers that make the Internet do the cool things that make all our lives easier.  Customers are, well, customers.  They are usually non-technical when it comes to web application development, and they use the most commonly available tool to browse the web in mediocrity.  Of course there are exceptions to this rule, but for the most part, it stands.

So when we talk to customers about browser market share, in order to convince them to stop dragging the IE7 ball-and-chain, we need to use the customer market share statistics.  These "charitable" stats show IE7 in a better light than the mechanics' stats do.  According to most internet mechanics, IE7 is not just dead, it is buried.  In fact, most internet mechanics (that I work with) don't use IE at all.  We are instead using Firefox, Chrome, or Safari.

I guarantee that our web browser experiences are orders-of-magnitude better than that of our customers.  And if our customers would climb out of their dusty comfort zones, and realize that there is a better class of internet tools available, they would also experience the Internet in a whole new way.  We could then deliver faster and more functional applications without needing to downshift.  Until then, however, we will be forced to sacrifice usability and user experience for backwards compatibility with backwards and outdated tools.  

The Internet marches on, albeit with some of our customers trying to drag it backwards.  Listen to your mechanics, and use the right tool.

Wednesday, May 16, 2012

MongoDB and Spring Data

This is a follow-up to my blog entry of 2012-04-28:  MongoDB - Jongo and Morphia.  In that post I talked about Java and MongoDB integration with the Jongo and Morphia APIs.  Today I am going to talk about using the Spring Data API with MongoDB; specifically, I will be focusing on the data repository approach.  At the time of this writing, Spring Data for MongoDB is at release 1.0.1 GA.

If you have not used Spring Data, then there is no time like now.  This is not a tutorial on Spring Data, but simply an example of how I used the Spring Data Repository approach to integrate with MongoDB.  Like my previous blog, I start with the Employee domain object.




The Employee object is composed of two other objects, Address and Department, and extends Person.  In the Employee object we use the class-level @Document annotation to map the Employee as a MongoDB document entity.  It will show up in the employee collection.  Optionally, I have included the @Id Spring Data annotation to map the internal _id field to the MongoDB ObjectID data type.  I have also elected to create two secondary indexes for the employeeId and hireDate fields using the @Indexed annotation.  When stored in MongoDB, there will be three indexes created.



The indexes created from the @Indexed annotation use default settings unless overridden by arguments to the annotation.  As seen below, I have defined the index on the employeeId field to be unique and sparse.  The unique index has a similar effect as a unique key constraint in an RDBMS solution.  When the repository's save() method attempts to store a document whose uniquely indexed field value already exists in the collection, the existing document is overwritten; no duplicates are created.

The sparse property set to true tells MongoDB only to include documents in this index that actually contain a value in the indexed field.
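Putting those pieces together, a sketch of the annotated Employee class might look like the following; the original listing is not available, so the exact shape (beyond the fields and annotations described above) is an assumption:

```java
import java.util.Date;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "employee")
public class Employee extends Person {
    @Id
    private String id;              // maps to the MongoDB ObjectId _id field

    @Indexed(unique = true, sparse = true)
    private String employeeId;      // secondary index, unique and sparse

    @Indexed
    private Date hireDate;          // secondary index with default settings

    private Address address;        // stored as an embedded document
    private Department department;  // stored as an embedded document
    private String title;
    private int salary;

    // getters and setters omitted
}
```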




That's it for the Employee object.  Unlike the Morphia approach, there are no Spring Data annotations in the Address or Department domain objects (unless I want more indexing); these objects will be stored in MongoDB as documents embedded in the Employee document.

Next I create a MongoDB Spring Data Repository interface by extending the MongoRepository interface.  Spring Data repositories are very powerful, inasmuch as they rely on the typical Spring context loading mechanism, and they create the implementation code for you.  Nowhere in my example will you see me writing the implementation for my repository interface.  Spring Data generates that implementation for me when the Spring context is loaded.

Code generation is driven by a method naming convention, seen below.  By naming my methods using the Spring Data convention, I can have Spring Data auto-generate the implementation code.  For example, if I use the findByLastName(String lastName) method, Spring Data can tell, using JavaBeans conventions, that there is a field named lastName in my object and in the MongoDB database/collection.  This method can return multiple Employee documents, so the return type is a list of employees.
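A sketch of such a repository interface, assuming an Employee document class mapped as described, could look like this:

```java
import java.util.List;

import org.springframework.data.mongodb.repository.MongoRepository;

import com.icfi.model.Employee;

// No implementation is ever written by hand; Spring Data generates it
// when the Spring context is loaded
public interface EmployeeRepository extends MongoRepository<Employee, String> {
    // Derived query: resolves to the lastName field on Employee
    List<Employee> findByLastName(String lastName);
}
```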


Of course this is mainly for the basic CRUD operations. 

According to the Spring Data MongoDB documentation, there is a set of methods available in the Repository interface.  Insert is not one of them.  If insert is needed, then the Spring Data template for MongoDB should be used.



More complex queries to the database would require some specific implementation in the repository, possibly using the @Query annotation.  For example, if I wanted a specific query for lastName and department.name, I could write the method findByEmployeeLastNameAndDepartmentName() and annotate it with the MongoDB JSON-formatted query, like so.  In this example I still don't write the Java implementation, just the specific MongoDB query.  Spring Data does the rest.

 

You will also notice the findByLastNameAndDepartmentName() method.  In this version, you see the complexity and power of the Spring Data method parser and query generator.  It parses the method signature, finds lastName as a field, and department.name as a nested field.  I really did not need the @Query annotation after all.
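Sketched out, the two variants discussed above might look like this inside the repository interface (the exact JSON query string is an assumption based on the field names):

```java
// Explicit MongoDB JSON query supplied via @Query; Spring Data still
// generates the implementation
@Query("{ 'lastName' : ?0, 'department.name' : ?1 }")
List<Employee> findByEmployeeLastNameAndDepartmentName(String lastName, String departmentName);

// Equivalent derived query: the method-name parser resolves lastName
// as a field and department.name as a nested field
List<Employee> findByLastNameAndDepartmentName(String lastName, String departmentName);
```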

The EmployeeServiceImpl uses the EmployeeRepository directly.  It is injected into the service implementation via Spring setter injection, as configured in the config XML.  The important thing to remember here is that there is never any implementation code written for EmployeeRepository.
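A minimal sketch of that service implementation, assuming an EmployeeService interface with matching methods, might look like the following:

```java
import java.util.List;

import com.icfi.model.Employee;

public class EmployeeServiceImpl implements EmployeeService {
    // Setter-injected by Spring per the XML configuration
    private EmployeeRepository employeeRepository;

    public void setEmployeeRepository(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    public List<Employee> findByLastName(String lastName) {
        return employeeRepository.findByLastName(lastName);
    }

    public void persistEmployees(List<Employee> employees) {
        // save(Iterable) comes from the CrudRepository API
        employeeRepository.save(employees);
    }
}
```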





 
Under the covers I have ensured the proper plumbing of database connectivity via the Spring Data config elements in a context config file.  In this file, I have the typical Spring headers with the appropriate schema definitions and locations.  After that housekeeping, I define the mongo instance and the mongo repository locations.  Then I define the mongo template to be used by the mongo repository.  The repository needs the template for database connectivity via a factory pattern.
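A sketch of that context configuration follows; the host, port, database name, and base-package values are assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:mongo="http://www.springframework.org/schema/data/mongo"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo-1.0.xsd">

  <!-- The mongo instance -->
  <mongo:mongo id="mongo" host="localhost" port="27017" />

  <!-- Scans for repository interfaces and generates their implementations -->
  <mongo:repositories base-package="com.icfi.repositories" />

  <!-- Template used by the repositories for database connectivity -->
  <bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
    <constructor-arg ref="mongo" />
    <constructor-arg value="test" />
  </bean>
</beans>
```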


To run this example, I use the class below, loading the Spring XML config via GenericXmlApplicationContext.
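A sketch of such a runner class; the class name, config file path, and query argument here are hypothetical:

```java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;

import com.icfi.model.Employee;

public class MongoMotivator {
    public static void main(String[] args) {
        // Load the Spring Data configuration from the classpath
        ApplicationContext ctx = new GenericXmlApplicationContext(
                "context/mongo-config.xml");

        // The repository implementation was generated at context load time
        EmployeeRepository repository = ctx.getBean(EmployeeRepository.class);

        for (Employee employee : repository.findByLastName("Ray")) {
            System.out.println(employee);
        }
    }
}
```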



Like many other Spring examples, there are usually several ways to accomplish the same thing.  I have used a mixture of annotations and XML configuration that I am comfortable with.  It's hard to beat the simplicity of "convention over configuration" offered by the Spring Data repository approach.  The parsing of repository interfaces and the implementation code generation are done at Spring container start-up time, so there is no performance hit when your application code uses the data repository.