Saturday, November 19, 2011

Where's My Exceptions, Spring Data MongoDB?

Abstraction For the Win!


To be fair, the Spring Data MongoDB project is currently only at Milestone releases (as of writing, they are up to M5).  Unlike most open source projects, they do have fairly good reference documentation.  Recently, we decided to add some unique constraints to a document by adding a compound index.  The Spring Data MongoDB project provides a couple of mechanisms to do this:

  1. Use the MongoTemplate class's ensureIndex method to programmatically create an index at runtime (see the sketch after this list).
  2. Use the CompoundIndexes and CompoundIndex annotations to declare the indexes on the document model class.
  3. Manually create the index(es) against the database using the MongoDB command line.
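For completeness, option #1 would look something like the sketch below.  Fair warning: the indexing API has been shifting between the milestone releases, so the exact ensureIndex signature may differ in your version.

package test;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.Index;
import org.springframework.data.mongodb.core.query.Order;

public class IndexSetup {

    // Illustrative only: programmatically create the same unique compound
    // index as the annotation example below.  Double-check the ensureIndex
    // argument order against the milestone you are using.
    public static void createIndexes(MongoTemplate mongoTemplate) {
        Index index = new Index()
            .on("firstName", Order.ASCENDING)
            .on("address", Order.ASCENDING)
            .named("people_first_name_address_idx")
            .unique();
        mongoTemplate.ensureIndex(index, "people");
    }
}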
For a variety of reasons, I decided to go with option #2.  Using the annotations is pretty straightforward:
package test;

import org.springframework.data.mongodb.core.index.CompoundIndex;
import org.springframework.data.mongodb.core.index.CompoundIndexes;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection="people")
@CompoundIndexes(value={
    @CompoundIndex(name="people_first_name_address_idx", def="{'firstName':1, 'address':1}", unique=true),
    @CompoundIndex(name="people_last_name_address_idx", def="{'lastName':1, 'address':1}", unique=true)
})
public class Person {

    private String address;
    private String firstName;
    private String lastName;

    ...
}
The example above declares two MongoDB compound indexes.  The first one creates an index on the firstName and address properties of the document.  The ":1" tells the index to sort that column in ascending order (see the org.springframework.data.mongodb.core.query.Order class's Javadoc for more details on sort orders).  The "unique=true" property tells MongoDB to reject any inserts/saves that violate this constraint (think of a unique constraint in the SQL world).  There are other properties on the CompoundIndex annotation, so refer to the Spring Data MongoDB Javadocs for more information.  When the application starts up, the Spring Data MongoDB library listens for the application start event via Spring and will create the indexes automatically (if they don't already exist).  This is a benefit over options 1 and 3 above, which require manual intervention.


So Why Didn't It Work?


According to the paragraph above, it is pretty easy to set up indexes using Spring Data MongoDB.  You annotate your classes, start your application and run some tests to make sure the unique constraint is being honored, right?  That's what I thought too.  I started out by annotating my document objects and re-building my application.  Before installing and starting my application in Tomcat, I decided to completely drop my database from MongoDB to ensure that everything was being created properly.  Once I was sure everything was clean, I installed my application and started Tomcat, causing Spring Data MongoDB to create the database, the collections and the indexes when the web application started.  I verified this by running the following command in MongoDB to confirm that the indexes existed:
MongoDB shell version: 1.8.2
connecting to: test
> db.people.getIndexes()
[
        {
                "name" : "_id_",
                "ns" : "test.people",
                "key" : {
                        "_id" : 1
                },
                "v" : 0
        },
        {
                "name" : "people_first_name_address_idx",
                "ns" : "test.people",
                "dropDups" : false,
                "sparse" : false,
                "unique" : true,
                "key" : {
                        "firstName" : 1,
                        "address" : 1
                },
                "v" : 0
        },
        {
                "name" : "people_last_name_address_idx",
                "ns" : "test.people",
                "dropDups" : false,
                "sparse" : false,
                "unique" : true,
                "key" : {
                        "lastName" : 1,
                        "address" : 1
                },
                "v" : 0
        }
]
> 
This allowed me to verify that Spring Data MongoDB actually did create the indexes at startup.  So far, so good.  My next step was to insert some data into the collection via my application.  This worked, and I was able to verify the document in MongoDB by using the .find({}) operation on the collection from the command line.  The next step was to attempt to insert the exact same document, which should fail due to the unique constraints.  To my surprise, it did not fail and I did not receive any exceptions from the MongoTemplate class (which executed the insert).  Just to make sure I wasn't crazy, I took the JSON and inserted it directly into the collection using the .save({...}) operation on the collection via the Mongo command line.  It did exactly what I expected it to do:
E11000 duplicate key error index: test-people.$people_first_name_address_idx  dup key: { : "John", : "123 Fake Street" }
This meant that the index was working.  So what was Spring Data MongoDB's problem?  What was happening to the error?  After some Google-fu, I stumbled across this JIRA issue:  https://jira.springsource.org/browse/DATAMONGO-134.  Hidden in there was the answer to my problem.  By default, the MongoTemplate class uses the default WriteConcern from the MongoDB Java Driver library.  The default WriteConcern, as it turns out, does NOT raise exceptions for server errors, only for network errors.  This means that you will only receive an exception if you lose connection to the database or try to connect to an invalid address/port, and will not receive an exception for any errors generated by MongoDB.  Lame, but easy to fix.  The WriteConcern class comes with some static constants that define the following write concern options:
    /** No exceptions are raised, even for network issues */
    public final static WriteConcern NONE = new WriteConcern(-1);

    /** Exceptions are raised for network issues, but not server errors */
    public final static WriteConcern NORMAL = new WriteConcern(0);
    
    /** Exceptions are raised for network issues, and server errors; waits on a server for the write operation */
    public final static WriteConcern SAFE = new WriteConcern(1);
    
    /** Exceptions are raised for network issues, and server errors and the write operation waits for the server to flush the data to disk*/
    public final static WriteConcern FSYNC_SAFE = new WriteConcern(true);

    /** Exceptions are raised for network issues, and server errors; waits for at least 2 servers for the write operation*/
    public final static WriteConcern REPLICAS_SAFE = new WriteConcern(2);

So, depending on your needs, you can change the write concern options used by the
MongoTemplate class.  Since I was using Spring to instantiate the MongoTemplate class, this required a couple of changes to my applicationContext.xml file:
<beans 
    xmlns:context="http://www.springframework.org/schema/context"      
    xmlns:mongo="http://www.springframework.org/schema/data/mongo" 
    xmlns:util="http://www.springframework.org/schema/util"  
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xmlns="http://www.springframework.org/schema/beans" 
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
    http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo-1.0.xsd
    http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

    ...

    <mongo:db-factory dbname="${mongodb.database}" host="${mongodb.host}" id="databaseFactory" password="${mongodb.password}" port="${mongodb.port}" username="${mongodb.username}" />

    <bean class="org.springframework.data.mongodb.core.MongoTemplate" id="mongoTemplate">
        <constructor-arg name="mongoDbFactory" ref="databaseFactory" />
        <property name="writeConcern">
            <util:constant static-field="com.mongodb.WriteConcern.SAFE" />
        </property>
    </bean>
    ...  
</beans>

After making this change and restarting the application, I finally got the exception I was expecting to receive from Spring Data MongoDB:

2011-11-18 15:44:32,913 ERROR - Unable to create or update person '{"firstName" : "John", "lastName" : "Doe", "address": "123 Fake Street"}'.
org.springframework.dao.DuplicateKeyException: E11000 duplicate key error index: test.people.$people_first_name_address_idx  dup key: { : "John", : "123 Fake Street"}; nested exception is com.mongodb.MongoException$DuplicateKey: E11000 duplicate key error index: test.people.$people_first_name_address_idx  dup key: { : "John", : "123 Fake Street"};
 at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:53)
 at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:1373)
 at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:333)
 at org.springframework.data.mongodb.core.MongoTemplate.saveDBObject(MongoTemplate.java:739)
 at org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:679)
 at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:669)
 at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:665)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1465)
 at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1396)
 at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1345)
 at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1335)
 at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
 at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
 at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
 at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
 at java.lang.Thread.run(Thread.java:680)
Caused by: com.mongodb.MongoException$DuplicateKey: E11000 duplicate key error index: test.people.$people_first_name_address_idx  dup key: { : "John", : "123 Fake Street"};
 at com.mongodb.CommandResult.getException(CommandResult.java:80)
 at com.mongodb.CommandResult.throwOnError(CommandResult.java:116)
 at com.mongodb.DBTCPConnector._checkWriteError(DBTCPConnector.java:126)
 at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:148)
 at com.mongodb.DBTCPConnector.say(DBTCPConnector.java:132)
 at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:262)
 at com.mongodb.DBApiLayer$MyCollection.insert(DBApiLayer.java:217)
 at com.mongodb.DBCollection.insert(DBCollection.java:71)
 at com.mongodb.DBCollection.save(DBCollection.java:633)
 at org.springframework.data.mongodb.core.MongoTemplate$13.doInCollection(MongoTemplate.java:745)
 at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:331)
 ... 41 more

So, it is hard to blame the Spring Data MongoDB guys for this issue, as it is really a configuration option of the underlying MongoDB Java Driver.  However, the MongoTemplate class does have a setWriteConcern method for this very reason, and it would have saved me some time if the reference documentation had mentioned this and/or had some examples of how to change the setting.  I guess that will be in the "release" :).
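If you build the MongoTemplate programmatically instead of in XML, the equivalent is a one-liner (a minimal sketch, assuming you already have a MongoDbFactory instance handy):

// Raise exceptions for server errors (like duplicate keys), not just
// network errors, by switching to the SAFE write concern.
MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory);
mongoTemplate.setWriteConcern(WriteConcern.SAFE);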

Monday, September 26, 2011

Planes, Trains, and Automobiles

Fun with Airports

If I had been able to sing "Mess Around" with John Candy,
maybe my trip wouldn't have been as bad either.
At some point, if you fly enough, you go through one of those "trips from hell" thanks to the airline industry.  Weather gets in the way of travel or you get delayed just enough that you miss that connection and spend the night sleeping in the airport in Little Rock, Arkansas.  Then there are those trips that just highlight the complete breakdown in rational thought and competence.  This is the story of my attempt to get from Burlington, VT to Washington-Dulles International Airport.  It will make you laugh.  It will make you cry.  It will make you understand why you should never fly United Airlines.  It will also highlight why doing just a little extra to help your customers goes a long way (instead of choosing the tactics United uses to treat its customers like boxes of diapers being shipped from a distribution center to a Walmart -- i.e., it gets there when it gets there).

The Storm on the Horizon

In retrospect, it would have made more
sense to just rent a car and drive the 10 hours.
I finished up at work and headed to the airport a little after 5 PM on Friday.  My flight was scheduled to leave Burlington, VT at 7:02 PM (Flight 3912).  It was a direct flight to Washington-Dulles International Airport (IAD), which was supposed to land a little after 9 PM.  I dropped off the rental car, made it through security, and sat down at the gate at around 5:40 PM.  At around 6:00 PM, the gate agent got on the PA and informed us that the plane had not yet left IAD due to a "maintenance issue" and that we probably wouldn't be leaving any time before 7:45 PM.  She said that she could help people rebook their trips if this was going to cause them to miss their connections and would try to get hotel rooms for people who would opt to wait for the 6 AM flight out of Burlington for IAD the next day.  However, as she pointed out, there were no hotel rooms available in Burlington, due to a couple of college class reunions and a car show going on (we learned later that what this really meant was that the one hotel that took United vouchers was full and United won't pay for any other hotel -- or at least that was the message).  After a handful of people made their way to the gate to get re-booked, the gate attendant came back on the PA at around 6:30 PM to announce that the plane had taken off from IAD and would land in Burlington around 8:00 PM.  It would take about 20 minutes to turn the plane around, which would mean we would get into IAD at around 10:00 PM (not bad, since this is only an hour later than originally scheduled, if you overlook the whole "maintenance" issue with the plane that delayed it in the first place).  Again, the gate attendant asked if anyone needed to re-book and spent roughly the next hour helping customers (we found out later that she worked for Delta, but had to cover the United gate).

Take It For a Test Drive

The funny thing about all of this is that the 2 gate attendants
and the 4 TSA workers couldn't go home until we took off,
so they were definitely motivated to get us out of there.
At 8:00 PM, the plane from IAD landed and the passengers deplaned.  There were about 30 of us left still waiting to get on the flight back to IAD.  Just as all of the passengers got off the plane, one of the workers from the tarmac came in and took the PA microphone.  He announced that there was another "maintenance" problem with the plane and it would take two hours to test.  He would get back to us at that time with an update.  In retrospect, this was the point where I should have just gone downstairs, rented a car, and driven the 10 hours from Burlington to my house outside of IAD.  Needless to say, this news was met with a lot of complaints and comments.  Again, a group of people tried to get re-booked and called around to find their own hotels.  Luckily for me, Burlington International Airport has free WiFi, so I kicked back, plugged my laptop in and connected to my SlingBox (just to give you a sense of how long the delay was, I was able to watch two episodes of Family Guy and the first National Treasure movie in that time).  Also during this time, one of the TSA workers "hit the wrong button" during a test, sounding an alarm that ordered "all TSA personnel to secure all exits" to the terminal "immediately" (they quickly told us that this was an accident).  At around 10:00 PM the same ground crew member came in to announce that the "tests went well" and they just needed to take the plane "out for a test drive".  Let me stop here and highlight how ridiculous this was.  I have never heard of this before.  They literally taxied the plane out on to the runway (with no passengers), started the engines up full speed and proceeded to drive around the runway "testing" the aircraft.  This went on for about an hour.  During this time, I believe that one of the gate attendants ran home to get her dog (or at least take care of her dog), while the other covered for her.  At around 11 PM, the plane made its way back to the gate, but ran into another road block.  While they were out joyriding in the plane, another plane had landed and pulled up to our gate to deplane.  Our plane had to wait on the tarmac for this to unfold.  So, at about 11:20, our plane pulled back up to the gate.  The ground crew member came back inside and announced that the test drive was a success.  They simply had to run one more final test and fill out some paperwork.  He would get back to us in 35 minutes.  This, obviously, was met with sarcastic comments and groans from the 20 or so of us left in the terminal.  At about midnight, we finally lined up to get on the plane.

The Point of No Return

Our first attempt at landing kind of went like this...
I've never seen a plane board that quickly.  There was a good chuckle from the crowd when the gate attendant announced that "premium" customers were welcome to board first.  We got on the plane and settled in within 5 or 10 minutes.  I got a look at the pilots, neither of whom was older than 25 (this will be important later).  We pushed back from the terminal, taxied out to the runway and then nothing.  We sat.  And sat.  And sat.  After about 30 minutes, the pilot came on the intercom and told us that because it was so late, the air traffic control tower had shut down for the evening.  He told us that normally this isn't a problem, as our plane got transferred to regional air traffic control.  However, because the regional air traffic control was now in charge, we had to wait for all inbound planes to land first before being cleared for take off (this is obviously because they are not local to the air field and therefore can't see what is going on).  So we waited.  And waited.  At a little after 1 AM, we finally took off.  We had been delayed by over 5 hours, but we were finally going to get home, even if it was at 3 AM.  The flight went smoothly (there was a little turbulence) and a little after 3 AM, the pilot notified us that we were starting our descent into the Washington metro area.  There were low clouds over the area (we were flying above them) at a couple hundred feet.  However, it was not fog:  you could see through the clouds in spots and see the lights on the ground.  I didn't think anything of it and assumed that we would be on the ground in minutes.

Trust Your Instincts, Luke

Do you have anything to declare?  Yeah, don't fly United.
The plane began its descent.  The landing gear came down.  We started getting closer to the ground.  We started to go through the low lying clouds.  Just as I thought that we were going to touch down, the engines went into full throttle, the plane pitched steeply up into the air and banked to the left, back towards Washington, D.C. and the DelMarVa peninsula.  After a few minutes, the captain came on and stated that they could not see the runway at the required height, so they aborted the landing.  They were going to try landing at a different runway (because that runway would magically have no clouds over it).  A few minutes later we made attempt number two, with the same result.  As we began circling, the pilot came on to say that he was in a conversation with air traffic control at IAD and regional air traffic control to see what the next step would be.  I have no doubt that more experienced pilots would have landed that plane.  About ten minutes later, the pilot came back on the intercom to say that we could not land at IAD and would be diverted to Allentown, PA.  He assured us that United would put us in hotels and help us arrange travel to IAD or some other destination.  About 25 minutes later, we landed at the airport in Allentown, PA and again sat on another runway.  The pilot told us that they were having trouble finding people at the airport to help us get to the gate.  He also told us that "things were changing over the last 30 minutes" and that now the gate agent was trying to arrange "transportation" to IAD for us.  We got off the plane and made our way inside to the gate area.  After standing around for about 5 minutes, one of the ground crew members from outside came in and got on the microphone.  He told us that he had arranged for buses to show up "in 5 or 10 minutes" to take us to IAD, if we were interested.  The other option was to stay at the airport.  This caught a bunch of people off guard, and they started asking about a hotel.  He said that United would not pay for a hotel, because we were diverted "due to weather".  My jaw dropped, as did the jaws of most of the people standing around the podium.  We were diverted "due to weather" because United sent a faulty plane from IAD to Burlington (I wonder how mad the people on that flight would be if they knew the plane they had just gotten off of had 3 hours of maintenance work done on it after they landed).  The reason we hit weather was obviously because of United and the issues with the plane.  This was a clever trick on their part to get out of having to help their customers.  It seems to me it would have made much more sense to pay $80-90 a person to put up the few who wanted to stay in Allentown for the night and gain a ton of good will.  Instead, they pissed off 20+ customers, who will now tell people about this hellish trip and why you shouldn't fly on United.  I couldn't listen to this nonsense, so I headed downstairs to baggage claim where the bus would supposedly meet us.

Get On the Bus!

Am I in Allentown or Las Vegas?
And so we waited.  Again.  We landed in Allentown at around 3:30 AM.  The buses that were supposed to show up in 5-10 minutes finally showed up around 4 AM.  I walked out of the baggage claim to find two booze cruise/party buses waiting to take us to IAD (complete with functioning interior LED disco lights and the "Vomit/Destruction will cause you to forfeit your $500 deposit" sign).  You couldn't even sleep on this bus, because the seats are on the side, like in the picture to the left.  So, in a darkened party bus, we pulled out of the airport in Allentown, PA for the three-hour drive through Harrisburg, Gettysburg, Frederick, and finally Leesburg.  In the middle of the trip, at 5:43 AM, United called my house and sent me this e-mail:

[Screenshot of the e-mail from United announcing that the flight had been cancelled.]

I wish I was funny enough to make up that plot twist.  Somehow the flight that didn't get cancelled when it should have been cancelled got cancelled after I got off it in Allentown, PA.  To add insult to injury, United apparently never updated their web site to say that the flight had been diverted, so people were waiting at IAD wondering what happened.  I'm sure this was classic CYA by United to make sure that they didn't have to pay anything out to those who stayed in Burlington, VT, as they are not liable if it is cancelled due to weather.  So, at 8 AM, we finally pulled into IAD on our party buses.  A little over 13 hours from when I was supposed to leave Burlington, I made it home.  There was no snow.  There was no rain.  There was no hurricane or tornado or typhoon.  Or earthquake.  There was only United Airlines and its terrible customer service.  Not a bad way to spend a Friday night, eh?

Tuesday, September 13, 2011

I Don't Need To Read The Manual...Spring Integration and JMX

Enabling JMX Monitoring with Spring Integration

The title pretty much says it all.  This seems like a pretty simple task, right?  I thought that I would just go to the Spring Integration reference documentation, follow the instructions, and boom, you can see all of your Spring Integration components via JMX from your favorite JMX monitoring client.  If only it were that easy.  The first hurdle I encountered was that the documentation at Spring's site fails to mention how to get the JMX schema included in your integration.xml or where the parsers/handlers live in the Spring library so that it can actually load and parse the integration.xml file.  The second is that there appear to be some typos in it (it should be "jmx:mbean-export", not "jmx:mbean-exporter", and the attributes of that tag are also listed incorrectly).  Grr (I guess you get what you pay for).  So, without further ado, this is how to turn on the MBean Exporter for Spring Integration:
  1. Declare the "jmx" namespace in your integration.xml file: 
    xmlns:jmx="http://www.springframework.org/schema/integration/jmx"
  2. Add the "jmx" schema to the "schemaLocation" attribute:
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/integration/jmx http://www.springframework.org/schema/integration/jmx/spring-integration-jmx-2.0.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-2.0.xsd"
  3. Declare the MBean server bean (Spring's MBeanServerFactoryBean will locate or create one):
    <bean id="mbeanServer" class="org.springframework.jmx.support.MBeanServerFactoryBean">
        <property name="locateExistingServerIfPossible" value="true"/>
    </bean>
  4. Declare the Integration MBean Exporter:
    <jmx:mbean-export server="mbeanServer"/>
  5. Add the spring-integration-jmx library to your classpath.

Sunday, August 14, 2011

War of the Worlds

Node.JS and IDE Support

Node.JS is a new and exciting evented I/O library for V8 JavaScript.  While the consensus seems to be to use Cloud9ide.com as the IDE of choice to develop Node.JS applications, this may be impractical for a couple of reasons.  First, Cloud9 is an online IDE, which means your source must be hosted on the Internet, either at Cloud9, Bitbucket or Github (I will say that the Github integration @ Cloud9ide.com is pretty nice).  Second, it is a rather limited IDE, which means you will have to do your other development elsewhere (if you only develop in JavaScript, then this isn't such a big deal).  Finally, the Cloud9ide.com IDE does NOT provide Node.JS code-completion for built-in modules (at least it did not at the time of writing this post).  Despite these shortcomings, one of the nice things about Cloud9ide.com is that it allows you to run and debug your application in their cloud, making it very easy to test your application.  With this in mind, I set out to see how well I could get Node.JS support into Eclipse, with the goal of supporting everything that Cloud9ide.com has and more.  I settled on the following requirements:
  • The ability to run/launch Node.JS applications from Eclipse
  • The ability to debug Node.JS applications from Eclipse
  • Code-completion for Node.JS modules
I set out to do all of the above with Eclipse, as it is my IDE of choice.  I was able to achieve all of the goals by following the instructions outlined in this post.  Please keep in mind that the instructions that follow assume version 0.4.10 of Node.JS and version “Indigo” (3.7) of Eclipse.  All these instructions were tested on Mac OSX 10.6.8.

One IDE to Rule Them All

The first piece of the puzzle is to install Node.JS:

  1. Download the tarball from http://nodejs.org/#download
  2. Untar/unzip the package with tar -xvf
  3. Change into the newly created directory
  4. Run ./configure
  5. Run make
  6. Run make install
  7. Verify Node.JS is installed by running node --version
Once you have installed and verified Node.JS, the next step (assuming that you already have Eclipse installed) is to install the Eclipse Debugger Plugin for V8 (Google):
  1. Open Eclipse
  2. Select Help > Install New Software…
  3. Click on the “Add…” button
  4. Enter the following information:
    1. Name: Eclipse Debugger Plugin for V8 Update Site
    2. Location: http://chromedevtools.googlecode.com/svn/update/dev/
  5. Click on “OK” to add the update site
  6. In the “Work with:” drop-down box, choose “Eclipse Debugger Plugin for V8 Update Site”. The plugin area should now be populated with the plugins offered by the update site.
  7. Check the box next to “Google Chrome Developer Tools” and click on “Next” to install.
  8. Walk through the wizard and install the plugin. Restart Eclipse when prompted for the changes to take effect.
The next plugin to install is the VJET Plugin from the good folks over at eBay:
  1. In Eclipse, Select Help > Install New Software…
  2. Click on the “Add…” button
  3. Enter the following information:
    1. Name: VJET Update Site
    2. Location: https://www.ebayopensource.org/p2/vjet/eclipse
  4. Click on “OK” to add the update site
  5. In the “Work with:” drop-down box, choose “VJET Update Site”. The plugin area should now be populated with the plugins offered by the update site.
  6. Check the box next to “VJET” and click on “Next” to install.
  7. Walk through the wizard and install the plugin. Restart Eclipse when prompted for the changes to take effect.
At this point, we have all the support we need to create, run, and debug V8 (and therefore Node.JS) applications.  However, this is essentially what Cloud9ide.com provides.  The cherry on top is the Node.JS code-completion support provided by the VJET plugin.  The support is a separate project that needs to be installed in your Eclipse workspace.  The VJET Type Library for Node.JS can be installed by following these steps:
  1. Download the VJET Type Library for Node.JS from http://www.ebayopensource.org/p2/vjet/typelib/NodejsTL.zip
  2. In Eclipse, select File > Import…
  3. In the Import wizard, select General > Existing Projects into Workspace
  4. Select “Next”
  5. Select the “Select archive file:” import option and click on the “Browse…” button
  6. Navigate to the location where the NodejsTL.zip file is saved and select it for import.
  7. Select “Finish” to import the type library.
  8. Verify that the NodejsTL project appears in your Eclipse workspace.
Now we have everything we need to get started creating applications with Node.JS from Eclipse.  To create a Node.JS project in Eclipse, follow these steps:
  1. In Eclipse, select File > New > Project…
  2. In the New Project wizard, select VJET > VJET Project and click on the “Next” button.
  3. On the “Create a VJET Project” screen of the wizard, enter the project name and location (leave the default selections for all other input fields). Click on the “Next” button.
  4. On the “VJET Settings” screen of the wizard, click on the “Projects” tab.
    1. Click on the “Add…” button.
    2. Select the NodejsTL project and click on the “OK” button. This will add auto-completion for NodeJS modules/functions.
  5. Click on the “Finish” button to create the project
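If you do not already have an application to test with, the classic hello-world HTTP server is enough to exercise the run and debug steps that follow; save something like this as app.js in the new project (the file name and port are arbitrary):

// app.js: a minimal HTTP server to exercise the Eclipse run/debug setup
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello from Node.JS\n');
}).listen(8124);

console.log('Server running at http://localhost:8124/');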
Assuming that you created a simple Node.JS application, the next step is to try to run your Node.JS application from WITHIN Eclipse:
  1. In Eclipse, select Run > External Tools > External Tools Configurations…
  2. In the External Tools Configurations window, select the “Program” node in the tree display on the left-hand side of the window.
  3. Click on the “New launch configuration” button (appears above the tree as a blank piece of paper with a yellow plus sign in the upper right-hand corner). The right-hand side of the window should populate with the launch configuration screen.
  4. Enter the following information:
    1. Name: Debug Node
    2. Location: /usr/local/bin/node
    3. Working Directory: ${project_loc}
    4. Arguments: --debug ${resource_name}
  5. Click on “Apply” to save the changes
  6. Click on “Close” to exit the “External Tools Configurations” window
  7. To launch the application, select the “Debug Node” configuration under Run > External Tools. Make sure that the .js file that you would normally pass to Node.JS from the command line is selected in the Script Explorer prior to running. Otherwise, you will get errors when Node.JS runs, as it will not know which file to execute.
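For reference, this launch configuration is just the Eclipse equivalent of starting Node.JS in debug mode yourself from the project directory (assuming app.js is the selected script):

node --debug app.js

Node.JS should print a message like "debugger listening on port 5858", which is the same default port used by the remote debugger configuration described below.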
Note that you can create multiple launch configurations, so if you would like to have one for debugging and one for running, simply duplicate the configuration, give it a new name (like “Run Node”) and remove the “--debug” option from the arguments.  Assuming that you executed step 7 above, you can now attach the V8 remote debugger to the process so that you can set breakpoints and inspect your application:
  1. In Eclipse, select Run > Debug Configurations
  2. In the Debug Configurations window, select the “Standalone V8 VM” node in the tree display on the left-hand side of the window.
  3. Click on the “New launch configuration” button (appears above the tree as a blank piece of paper with a yellow plus sign in the upper right-hand corner). The right-hand side of the window should populate with the launch configuration screen
  4. Enter the following information:
    1. Name: Debug Node 5858
    2. Host: localhost
    3. Port: 5858
    4. Breakpoint sync on launch: Merge local and remote breakpoints
  5. Click on “Apply” to save the changes
  6. Click on “Close” to exit the “Debug Configurations” window
  7. To launch the remote debugger, select the “Debug Node 5858” configuration from the Debug Configurations wizard and click on the “Debug” button. This assumes that the Node.JS process is already running and in debug mode, using the default debug port (5858).
Assuming that the remote debugging configuration connects successfully to your running application,  you can place breakpoints in the code by locating the “virtual project” created by the V8 plugin. To do this, use the following directions (assumes that Eclipse is already open AND the remote debugger configuration created above is currently connected to a running Node.JS application in debug mode):
  1. Change to the VJET JS perspective
    1. If the VJET JS perspective is not open, open it by selecting Window > Open Perspective > Other…
    2. Select “VJET JS” from the list and click on the “OK” button.
  2. Locate the “Debug Node 5858” project that appears in the “Script Explorer” view on the left-hand side of the perspective.
  3. Expand the project and double click on the source file that you would like to set a breakpoint in to open it in the viewer.
  4. Right-click to the left of the line that you would like to place a breakpoint on in the file viewer and select “Toggle Breakpoint” to set the breakpoint.
  5. Interact with the Node.JS application. The application should pause when it hits the breakpoint set in Eclipse.
Note that the virtual project actually lets you see the code from the running Node.JS instance and NOT the source that you imported into Eclipse. In fact, if you just want to use Eclipse for setting breakpoints, you do not even need to import the source. You simply need to create the remote debugger configuration and set breakpoints in the virtual project once the remote debugger has connected to a running Node.JS instance in debug mode.  According to the V8 documentation (links below), you can make Eclipse actually honor the breakpoints set in your project.  However, I was not able to get this to work (and since the process is running from your code in the workspace anyway, the Virtual Project is actually already pointing at the same source files).  And that's it!  You now have the ability to create, run, and debug Node.JS applications from Eclipse with the added benefit of code-completion for the built-in modules in Node.JS.  Also, because the code-completion comes from a project imported into Eclipse, you can always modify it to add additional support for internal libraries, etc.  Below is a list of resources that I used to figure this all out:

Node.JS
Using Eclipse as a Node.JS Debugger
Eclipse Debugger Plugin for V8
Eclipse Debugger Plugin for V8 Tutorial
Eclipse Virtual Projects
VJET
Importing VJET JavaScript Type Libraries into Eclipse
Node.JS Step-by-Step

Sunday, July 31, 2011

Keep Your Hands Off of My Whitespace!

We Can Put a Man on the Moon...

Groovy has some awesome XML reading and parsing features that make it a breeze for developers to create new XML strings or to parse existing XML strings.  The XmlSlurper and associated GPathResult classes make it easy to traverse and manipulate the DOM of an XML document/string.  On top of that, the builder support in Groovy (MarkupBuilder, StreamingMarkupBuilder) makes it much easier for developers to create structured documents and get essentially built-in commenting for free (since the builder syntax essentially describes the hierarchical document by itself).  With all of these improvements and modern conveniences provided by Groovy regarding XML, you would think that it would be easy to perform the following task:
  1. Read in a file containing XML
  2. Parse the file and find a particular element
  3. Edit the value of said element
  4. Update the file with the changes, preserving the original formatting and namespace(s) of the file.
Good luck.  The builders are great for creating new documents.  While you can use the StreamingMarkupBuilder to handle data read from a file, it does NOT preserve the white-space (and you have to know what additional calls need to be made to preserve any namespaces in the original XML document).  This was a choice made by the implementer, which certainly makes sense for the normal use case of the StreamingMarkupBuilder (creating XML on the fly as a response to a request), where white-space is irrelevant (and takes up precious bytes ;) ).  So, are we just doomed to lose our pretty, human-readable formatting when editing XML?  The answer is no.  Luckily, there are some other classes provided by Groovy that let you do things similar to the normal Groovy XML manipulation approach (slurper, markup builders and GPath).

DOMination

The solution to the problem above is to use the groovy.xml.DOMBuilder and groovy.xml.dom.DOMCategory classes to manipulate XML, while still preserving the formatting/white-space.  Assume that you already have a java.io.File object pointing to an XML file.  You can do the following to manipulate the contents of that file:

    def xml = file.text
    def document = groovy.xml.DOMBuilder.parse(new StringReader(xml))
    def root = document.documentElement
    use(groovy.xml.dom.DOMCategory) {
        // manipulate the XML here, i.e. root.someElement?.each { it.value = 'new value'}
    }

    def result = groovy.xml.dom.DOMUtil.serialize(root)

    file.withWriter { w ->
        w.write(result)
    }
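
To make the manipulation step concrete, here is a tiny self-contained sketch (the XML content and element names are made up purely for illustration):

    def xml = '<people><person><name>John</name></person></people>'
    def document = groovy.xml.DOMBuilder.parse(new StringReader(xml))
    def root = document.documentElement

    use(groovy.xml.dom.DOMCategory) {
        // GPath-style navigation works against the DOM inside the category
        root.person.name.each { it.value = 'Jane' }
    }

    assert groovy.xml.dom.DOMUtil.serialize(root).contains('Jane')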

With 10-15 lines of Groovy code, we have just loaded XML from a file, manipulated its contents, and written it back out to file, while preserving all formatting from the original file.  I wasted about 4 hours trying to figure this out before I stumbled upon the DOMCategory class.  For more information on editing XML using DOMCategory, see the Groovy tutorial on it here.

Wednesday, July 27, 2011

Maven Trick to Rename Grails War

Convention vs. Convention

One of the constant problems with the Grails Maven integration is the competing conventions imposed by the two technologies.  An obvious example of this is the naming convention used for WAR files by the two.  The Maven convention is to use the following when creating a WAR file:
${project.artifactId}-${project.version}.war
When building your Grails application as a WAR file using the Grails command line (i.e. grails prod war), the value of the grails.project.war.file configuration property found in the application's BuildConfig.groovy file is used.  This is obviously not the same convention as the one used by Maven, as described above, and depending on which goals you use with the Grails Maven plugin, you may end up with a WAR named using the Maven convention instead of the Grails convention.  This is because the Grails Maven plugin includes two WAR building Mojos:  GrailsWarMojo and MvnWarMojo.  The former is not tied to any specific phase and is executed if the war goal is executed.  The latter is tied to a phase (package) and therefore is executed automatically if the Grails Maven plugin is included in your POM file and package is specified as a phase to execute (and your project packaging is grails-app).  This Mojo uses the Maven WAR naming convention outlined above.  Therefore, building your Grails application using mvn clean package will result in a WAR named as outlined above.
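For reference, the Grails half of this convention is a one-line setting in BuildConfig.groovy; for example, to produce a WAR without a version suffix, you would set something like:

// grails-app/conf/BuildConfig.groovy
grails.project.war.file = "target/${appName}.war"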

Cutting Against The Grain

So, now that we know about the two competing conventions, how do we make the Maven build do what we want (that is, how do we make it produce a WAR file named using the Grails naming convention)?  The best solution that I have found is to use the maven-antrun-plugin.  Normally, I don't condone the use of the Ant plugin in Maven, as it is essentially a way to hand control away from Maven, and it is very easy to violate the conventions set forth by Maven with this solution.  However, in this case we are trying to break Maven's convention, so the following solution feels acceptable.  To rename the WAR after Maven is done creating it, simply add the following plugin definition to your POM file AFTER the declaration to use the Grails Maven plugin:
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
            <execution>
                <id>run-ant-rename-war</id>
                <phase>package</phase>
                <goals>
                    <goal>run</goal>
                </goals>
                <configuration>
                    <tasks>
                        <move file="${project.build.directory}/${project.artifactId}-${project.version}.war" tofile="${project.build.directory}/${project.artifactId}.war" />
                    </tasks>
                </configuration>
            </execution>
        </executions>
    </plugin>
This will rename (by moving) the WAR produced with the Maven WAR naming convention to the Grails WAR naming convention, leaving it in the target directory (or whatever you have configured via Maven to be the project.build.directory).

Monday, July 18, 2011

Grails web.xml Generation Magic

The Problem

Grails provides a nice feature in each plugin's descriptor file that allows the plugin to make modifications at build time to the web.xml file created by the Grails application that includes the plugin.  This can be done by implementing the doWithWebDescriptor closure in the plugin's descriptor file (see the Grails documentation for more information).  This works fine if you have a limited number of Grails plugins in your application that want to modify the web.xml file, or if the plugins' changes to the web.xml file do not need to happen in a particular order.  I recently ran into an issue where we needed to make sure that a custom Grails plugin added a servlet filter to the web.xml that came BEFORE the filters added by the Spring Security plugin.  I did not want to modify the Spring Security plugin to make sure its modifications to the web.xml came after the custom plugin's modifications, nor did I want to assume that the plugins would be installed in a particular order by Grails when building the main application.

The Solution

After realizing that I could not rely on the order that each plugin's doWithWebDescriptor closure would be called, I decided to use the Grails application's BuildConfig.groovy file to make sure that the web.xml file was modified AFTER all plugins had modified the web.xml file.  This would allow the build to re-organize the servlet filters in the web.xml file to ensure they were in the right order (and would also cover the case where one or more of the filters was not added to the web.xml file -- i.e. this solution would work if one or both of the filters are missing from the generated web.xml file).  The trick is to make use of the grails.war.resources closure in the BuildConfig.groovy file.  This closure is called right before the WAR file is created, ensuring that nothing else will modify the web.xml file.  This takes care of the timing issue.  However, I still needed to write some code to actually modify the order of the servlet filters in the web.xml file.  To do this, I made use of the Groovy shell and binding classes:
    grails.war.resources = { stagingDir, args ->
        ...
        updateWebXml("${stagingDir}/WEB-INF/web.xml")
    }

    private def updateWebXml(webXmlPath) {
        Binding binding = new Binding()
        binding.identity {
            setVariable("webXmlPath", webXmlPath)
        }

        new GroovyShell(binding).evaluate(new File("ModifyWebXml.groovy"))
    }


The updateWebXml method uses the GroovyShell object to execute a Groovy script file, named ModifyWebXml.groovy.  This script uses the XmlSlurper class to read in the existing web.xml file and write out the modified one in its place:


    def webXml = new java.io.File(webXmlPath)
    if(webXml.exists()) {
        def origWebXml = new XmlSlurper().parse(webXml)
        def newWebXml = new groovy.xml.StreamingMarkupBuilder().bind { builder ->
            // Create the new web.xml file from the old one!
        }

        webXml.withWriter { w ->
            w.write(groovy.xml.XmlUtil.serialize(newWebXml))
        }
    }


This solution allowed me to create a script that could re-order the contents of the web.xml file and handle all cases with regard to whether or not the servlet filter entries in question are present in the web.xml file used as input.  It is also important to note that this solution can be extended to help in any other situation where you need to make last-second modifications to files to be included in the WAR file at build time.

Sunday, July 3, 2011

Bending the Spoon: How to build your Grails application with Maven

It Hurts When I Do That...

When starting a new project a few years back, we made the decision to transition from custom Ant scripts to using Maven as our build manager.  This project contained standard Java libraries (JAR files), OSGi bundles, Grails plugins, and of course, a Grails application (WAR file).  We toyed with the idea of just calling our Ant scripts from Maven, but as engineers, we felt that was a cop-out and defeated the purpose of transitioning to a convention-over-configuration build system such as Maven.  However, in order to achieve the transition to Maven we wanted to be able to completely build our Grails application from Maven without using the Grails command-line (this also meant not taking the easy way out and just using the Maven "antrun" plugin to invoke Grails commands).  Along the way, we had to resolve various issues pertaining to conflicts in conventions between Grails and Maven.  This first article will walk through the Maven support built in to Grails, including the Grails Maven plugin and how we achieved complete Grails/Maven integration for our project.  Subsequent articles will focus on "Mavenizing" your Grails plugins and treating them like any other Maven dependency when building your Grails application.  For the purposes of this article, assume Grails 1.3.7 and Maven 2.2.1 (however, the steps could be applied to any version of Grails and Maven 2.x or 3.x).

Know Your Build Needs

The first step when choosing how to build your Grails application is to spend some time familiarizing yourself with the build support in Grails.  At its core, Grails provides dependency management options in combination with build management and packaging scripts out of the box.  These options can be broken down into the following categories:

  • Simple command line build with few or no external dependencies 
  • Command line build with repository managed dependencies 
  • Command line build with Maven managed dependencies (presence of a POM file) 
  • Maven build (build via Maven using Grails Maven plugin, not the Grails command line)

Build Management

The simple build approach is to let Grails manage everything by placing any required third-party library JARs in the /lib folder of your Grails application and using the Grails command line tools to build, package and/or run your application or plugin.  The Grails command line tools make use of the Grails Gant scripts, which can be found in the $GRAILS_HOME/scripts folder of your Grails installation.  Building your Grails application or plugin is as simple as running:

grails <environment> war
or
grails package-plugin
where <environment> is one of  "dev", "test", or "prod".  More "advanced" users can make use of the built-in dependency resolution options, as described below.

Dependency Resolution

It is important to realize that Grails uses Ivy to manage the resolution of any dependencies required by plugins or listed in your BuildConfig.groovy file.  Ivy’s role is simply to resolve dependencies (libraries) and get them on the classpath and into your WAR/ZIP file when building your application/plugin via the Grails command line tools.  You can declare dependencies (with Maven-style coordinates) in your application’s BuildConfig.groovy file:


    grails.project.dependency.resolution = {
        ...
        dependencies {
            runtime 'com.mysql:mysql-connector:5.1.5'
        }
    }


You also need to be aware that plugins may include JARs in their /lib directory or a Dependencies.groovy file that defines libraries required by the plugin when it is installed in your application (this will be very important later when we discuss "Mavenizing" your Grails plugins).  Finally, Grails also supports pointing Ivy at Maven repositories to resolve dependencies in your BuildConfig.groovy file:


    grails.project.dependency.resolution = {
        repositories {
            mvnRepo "http://repo.grails.org/grails/core"
        }
        ...
    }


This is useful when you need to resolve an internal dependency library when prototyping or building a simple application.  If you are developing in an environment where a POM file full of required dependencies and repositories has been provided for you, you can use the pom true option in your BuildConfig.groovy to make the Grails scripts resolve dependencies from the provided pom.xml file in the root of your project:


    grails.project.dependency.resolution = {
        pom true
    }


This option causes the Grails build scripts to look at your POM file for dependency resolution.

Grails Maven Integration

Up to this point, we have focused on looking at the built-in build support in Grails via the Gant scripts and dependency management mechanisms exposed via the BuildConfig.groovy file.  In addition to its “native” build support, Grails also supports building plugins and applications fully via Maven (with some extra TLC).  The Grails core development team maintains a Grails Maven plugin, which provides Maven goals to perform various Grails related tasks (documentation for the plugin can be found here).  It is important to recognize from the beginning that the Grails Maven plugin is essentially a facade around the Grails Gant scripts.  This means that at the point where the Grails Maven plugin begins to execute a goal, control is transferred from Maven to the Grails scripts themselves.  This detail is probably something that is normally considered part of the "black box" and not given much thought, but it is something that gave us a lot of pain at first.  For instance, because the Grails scripts are essentially being "executed" by the Grails Maven plugin, any libraries that the scripts require to run must be placed on the classpath as runtime dependencies.  This means that you must include dependencies in your POM file that are not actually required by your application at runtime (there have been numerous JIRA issues opened regarding this exact issue in the past).  Because it is a wrapper around the Grails scripts, it is also imperative to make sure that the conventions match.  There are four properties in BuildConfig.groovy that can be set to cause the Grails scripts to produce artifacts in the same directories that Maven expects to find its build artifacts:


    grails.project.class.dir = "target/classes"
    grails.project.test.class.dir = "target/test-classes"
    grails.project.test.reports.dir = "target/test-reports"
    grails.project.war.file = "target/${appName}-${appVersion}.war"


The above settings are the default values included in the BuildConfig.groovy file when you generate your project (the "grails.project.war.file" property will be commented out by default). 

Where To Start

The first decision that we made was to create a POM file that would encapsulate all of the Grails dependencies required by our application.  While the Grails Maven plugin provides a goal to create a POM from their archetype (create-pom), I would recommend that you start by hand-rolling your application or plugin's POM file.  The main reason for this is that the dependencies in Grails 1.3.x are a mess and the archetype produces a somewhat incorrect/out-of-date POM file.  By encapsulating these dependencies in one POM file, it would allow us to easily change the dependency set when and if we decided to upgrade the version of Grails used in our application (we were able to upgrade successfully from 1.2.0 to 1.3.7 using this method with minimal changes to our project's POM file).  This POM file looks something like this:


    <?xml version="1.0" encoding="utf-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>dependencies</groupId>
        <artifactId>grails</artifactId>
        <version>1.0-SNAPSHOT</version>
        <packaging>pom</packaging>

        <dependencies>
            <dependency>
                ....
            </dependency>
        </dependencies>

        ...
    </project>


This is just a simple POM that will cause Maven to pull in all of the Grails dependencies required to build our application or plugin.  It is recommended that you leave the version as a SNAPSHOT to make it easier for you to change the version of Grails or the included dependencies without having to redeploy the POMs for your projects that depend on it.  Through some trial and error of attempting to build a simple Grails application WITHOUT any artifacts (i.e. no controllers, domain classes, etc.), we arrived at the initial set of dependencies:

  • org.grails:grails-bootstrap:1.3.7
  • org.grails:grails-core:1.3.7
  • org.grails:grails-crud:1.3.7
  • org.grails:grails-gorm:1.3.7
  • org.grails:grails-scripts:1.3.7
  • net.sf.ehcache:ehcache-core:1.7.1
  • hsqldb:hsqldb:1.8.0.10
  • org.slf4j:slf4j-log4j12:1.5.8 (required by Grails scripts to execute)

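Each of these becomes an ordinary dependency element in the dependency POM.  As a minimal sketch (using a grails.version property is my own convention here, not something Grails requires, but it makes the upgrade path described earlier even easier):


    <properties>
        <grails.version>1.3.7</grails.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.grails</groupId>
            <artifactId>grails-bootstrap</artifactId>
            <version>${grails.version}</version>
        </dependency>
        <!-- ...and so on for the rest of the list above... -->
    </dependencies>
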
The majority of these dependencies will be required by your application at runtime.  As mentioned earlier, a few of them are merely required for the Grails scripts to execute properly (more on how to resolve this issue in a bit).  Once you have your Grails dependency POM file created, install it into your local Maven .m2 repository:
mvn install
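Given the coordinates above (groupId "dependencies", artifactId "grails"), the installed POM lands at a predictable path in the local repository, which you can sanity-check with:
ls ~/.m2/repository/dependencies/grails/1.0-SNAPSHOT/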
The next step for us was to figure out which dependencies needed to be included in this POM.  To do this, we created a simple Grails application using the Grails command-line tools: 
grails create-app test-app
Once we had created the skeleton project, we packaged the application into a WAR file so that we could see which dependencies the Grails build scripts pull into the WAR:
grails prod war
This produces a WAR file in the "target" directory of the Grails application, which we set aside for later comparison to the WAR file produced by Maven.  Next, in order to build a WAR for the same test project using Maven, we created a new POM file for it:


    <?xml version="1.0" encoding="utf-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>my-company</groupId>
        <artifactId>test-app</artifactId>
        <version>1.0-SNAPSHOT</version>
        <packaging>grails-app</packaging>

        <dependencies>
            <dependency>
                <groupId>dependencies</groupId>
                <artifactId>grails</artifactId>
                <version>1.0-SNAPSHOT</version>
                <type>pom</type>
            </dependency>
        </dependencies>

        <build>
            <plugins>
                <plugin>
                    <artifactId>maven-clean-plugin</artifactId>
                    <version>2.4.1</version>
                </plugin>
                <plugin>
                    <groupId>org.grails</groupId>
                    <artifactId>grails-maven-plugin</artifactId>
                    <version>1.3.7</version>
                    <extensions>true</extensions>
                    <configuration>
                        <nonInteractive>true</nonInteractive>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>


Note that if you do not modify the "app.version" property in the application.properties file, Maven will fail the "validate" phase, complaining that the version numbers do not match.  The simple fix is to make sure the version number in application.properties matches the version number in your POM file (i.e., 1.0-SNAPSHOT).  For the test project, the file would end up looking something like this (a sketch; the other entries are whatever "grails create-app" generated):
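

    app.grails.version=1.3.7
    app.name=test-app
    app.version=1.0-SNAPSHOT


Also note the special packaging type for this application ("grails-app"); the <extensions>true</extensions> element on the Grails Maven plugin is what teaches Maven about this custom packaging.  Finally, notice that we added a configuration block to execute Grails with the "non-interactive" flag, so that the build will not pause to prompt us for a "Y" if input is required.  Once we had created and saved the pom.xml file to the root of the test Grails project, we built the WAR file with Maven by running: 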
mvn clean package
This will produce a WAR file in the "target" directory of your Grails project.  The next step was to extract both WAR files (the one produced by the Grails command-line tools and the one built with Maven) to temporary directories in order to compare their contents.
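A quick sketch of that setup, assuming bash and hypothetical directory and file names (the WAR names follow from app.name/app.version and the POM coordinates above):


    # extract the WAR that the Grails command line produced (set aside earlier)
    unzip -q /tmp/grails-cli/test-app-1.0-SNAPSHOT.war -d /tmp/war-grails
    # extract the WAR that Maven just built
    unzip -q target/test-app-1.0-SNAPSHOT.war -d /tmp/war-maven
    # a first-pass look at the library differences
    diff <(ls /tmp/war-grails/WEB-INF/lib) <(ls /tmp/war-maven/WEB-INF/lib)
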

We Don't Need No Stinkin' Dependencies!

Once the contents of the two WAR files had been extracted to different directories, we used a merge/diff tool capable of comparing directories to look for differences in each WAR's WEB-INF/lib folder.  The comparison will tell you a few different things:

  • Libraries that appear ONLY in the WAR produced by Maven need to be EXCLUDED from the Grails dependency POM file we created earlier, DELETED from the WAR by using a trick within the BuildConfig.groovy file, OR matched up against a dependency of the same name but a different version number and reconciled (e.g., Grails will pull in a different version of the Log4j library than Maven).
  • Libraries that appear ONLY in the WAR produced by Grails should be considered to be missing dependencies in the Maven build and need to be ADDED to the Grails dependency POM file we created earlier.
  • The Spring dependencies pulled in by Grails and Maven are identical, but the JAR files are named differently (the Grails versions are named "org.springframework.aop-3.0.5.RELEASE.jar", while the Maven ones are named "spring-aop-3.0.5.RELEASE.jar").  Verify that all of the module names match (e.g., "aop", "asm", etc.), then follow the two bullet points above to decide which need to be included or excluded.

To resolve the dependency soup that you see in the WARs, follow these steps: 

  1. Add any missing dependencies (those that are in the Grails command-line WAR ONLY) to the Grails dependency POM.
  2. Rebuild the WAR file via Maven.
  3. Re-compare the WAR produced by Maven to the WAR produced by Grails.
  4. Look for duplicate libraries that have different version numbers.  Exclude the conflicting version from the Grails dependency POM file and add a dependency on the correct version in its place.
  5. Rebuild the WAR file via Maven.
  6. Add logic to the project's BuildConfig.groovy to delete the non-runtime dependencies pulled in just for executing the Grails scripts (more on how to do this in a bit).
  7. Rebuild the WAR file via Maven.
  8. Re-compare the WARs -- the included libraries should now be identical (if not, repeat these steps until they are).

To see the Maven dependency tree, use the following Maven command from the root of your Grails project:
mvn dependency:tree -Dverbose=true
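Capturing the output in a file makes it easier to review:
mvn dependency:tree -Dverbose=true > dependency-tree.txt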

This will output the resolved dependency tree for your project and help you see how dependencies are being transitively resolved and pulled into the WAR file (I recommend capturing the output in a file, as shown above, since it can be rather long).  The "verbose" flag tells the goal to also print conflicts, so you can see when two dependencies pull in different versions of the same library and how/why you end up with a particular dependency in the WAR file (based on Maven's conflict resolution strategy).  It is also recommended that you clean up these conflicts by excluding the dependency that you do not want in your dependency tree.  To exclude a dependency in your POM file, find the parent dependency and add the following block to it:


    <exclusions>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions> 
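

In context, the exclusion block hangs off the dependency that drags in the unwanted artifact.  For example, the final dependency list at the end of this post has slf4j-log4j12 excluding log4j (since log4j 1.2.16 is declared directly):


    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.5.8</version>
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>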


Note that the following artifacts will appear only in the Maven WAR and should NOT be excluded via your Maven dependencies POM file (we will take care of removing these later):

  • org.apache.ant:ant:1.7.1
  • org.apache.ant:ant-launcher:1.7.1
  • org.apache.ant:ant-junit:1.7.1
  • org.apache.ant:ant-nodeps:1.7.1
  • org.apache.ant:ant-trax:1.7.1
  • org.grails:grails-docs:1.3.7
  • org.grails:grails-scripts:1.3.7
  • org.grails:grails-test:1.3.7

As mentioned earlier, because of how the Grails Maven plugin invokes the Grails scripts to build the application, some dependencies (listed above) are required just to run the underlying Grails scripts.  These dependencies are not required to deploy your WAR (unless you introduce a specific runtime dependency on them yourself).  To clean out any libraries that you do not want in your WAR (such as the ones listed above), you can make use of the grails.war.resources closure in the BuildConfig.groovy file:


    grails.war.resources = { stagingDir, args ->
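        // remove libraries that should not ship in the WAR; repeat for each jar in the list near the end of this post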
        delete(file:"${stagingDir}/WEB-INF/lib/ant-1.7.1.jar")
    }


This closure gets executed by Grails right before it packages up the WAR file.  It calls the Gant "delete" task to remove the file from the staging directory prior to creating the WAR archive.  Simply add a delete call for each library that you do not want included in the WAR (a complete list of what needs to be deleted appears towards the end of this article).  You can even make it a superset of files to delete, as referencing a file that is not always present should not cause the build to fail (this is handy if you are using profiles in your Maven build to include different dependencies depending on the selected profile).

Putting It All Together

Once we had identified all of the dependencies that needed to be excluded/deleted, it was just a matter of modifying both the project's BuildConfig.groovy file and the Grails dependency POM we created earlier.  Below is a list of all of the files that need to be deleted in order to get the two WAR files in sync (see the closure sketch after the list):

  • ant-1.7.1.jar
  • ant-junit-1.7.1.jar
  • ant-launcher-1.7.1.jar
  • ant-nodeps-1.7.1.jar
  • ant-trax-1.7.1.jar
  • bcmail-jdk14-138.jar
  • bcprov-jdk14-138.jar
  • core-renderer-R8.jar
  • gant_groovy1.7-1.9.2.jar
  • gpars-0.9.jar
  • grails-docs-1.3.7.jar
  • grails-scripts-1.3.7.jar
  • grails-test-1.3.7.jar
  • itext-2.0.8.jar
  • ivy-2.2.0.jar
  • jsr166y-070108.jar
  • junit-4.8.1.jar
  • radeox-1.0-b2.jar
  • servlet-api-2.5.jar
  • svnkit-1.2.3.5521.jar
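
Plugged into the grails.war.resources closure shown earlier, the cleanup might look like this (a sketch that simply iterates over the list above):


    grails.war.resources = { stagingDir, args ->
        // every build-time-only jar identified by the WAR comparison
        ['ant-1.7.1.jar', 'ant-junit-1.7.1.jar', 'ant-launcher-1.7.1.jar',
         'ant-nodeps-1.7.1.jar', 'ant-trax-1.7.1.jar', 'bcmail-jdk14-138.jar',
         'bcprov-jdk14-138.jar', 'core-renderer-R8.jar', 'gant_groovy1.7-1.9.2.jar',
         'gpars-0.9.jar', 'grails-docs-1.3.7.jar', 'grails-scripts-1.3.7.jar',
         'grails-test-1.3.7.jar', 'itext-2.0.8.jar', 'ivy-2.2.0.jar',
         'jsr166y-070108.jar', 'junit-4.8.1.jar', 'radeox-1.0-b2.jar',
         'servlet-api-2.5.jar', 'svnkit-1.2.3.5521.jar'].each { jar ->
            delete(file: "${stagingDir}/WEB-INF/lib/${jar}")
        }
    }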

After all inclusions and exclusions, the Grails dependency POM file should contain the following dependencies (with exclusions):

  • org.grails:grails-bootstrap:1.3.7
  • org.grails:grails-core:1.3.7
    • Exclusion: commons-beanutils:commons-beanutils
    • Exclusion: commons-collections:commons-collections
    • Exclusion: commons-digester:commons-digester
    • Exclusion: commons-pool:commons-pool
    • Exclusion: javax.persistence:persistence-api
  • org.grails:grails-crud:1.3.7
  • org.grails:grails-gorm:1.3.7
  • org.grails:grails-scripts:1.3.7
  • org.aspectj:aspectjweaver:1.6.8
  • commons-beanutils:commons-beanutils:1.8.0
    • Exclusion: commons-logging:commons-logging
  • commons-codec:commons-codec:1.4
  • commons-collections:commons-collections:3.2.1
  • commons-pool:commons-pool:1.5.5
  • net.sf.ehcache:ehcache-core:1.7.1
  • hsqldb:hsqldb:1.8.0.10
  • jstl:jstl:1.1.2
  • log4j:log4j:1.2.16
  • org.slf4j:slf4j-log4j12:1.5.8
    • Exclusion: log4j:log4j
  • taglibs:standard:1.1.2

We now had a Grails dependency POM that would help us build a WAR of our application using Maven instead of the Grails command line.  This POM, in combination with the modifications to BuildConfig.groovy to remove unnecessary dependencies, produced a WAR that contains the exact same runtime dependencies as the one built by the Grails command-line tools.  The only other piece that we added to our project POM file to make the results more Maven-friendly was the Antrun plugin (ugh, I know) to rename the WAR and drop the version number (note that the antrun run goal must be bound to a lifecycle phase, package in this case, for the rename to happen during the normal build):


    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.6</version>
        <executions>
            <execution>
                <!-- bind the run goal to the package phase so the rename happens on every build -->
                <phase>package</phase>
                <goals>
                    <goal>run</goal>
                </goals>
                <configuration>
                    <tasks>
                        <move file="${project.build.directory}/${project.artifactId}-${project.version}.war" tofile="${project.build.directory}/${project.artifactId}.war" />
                    </tasks>
                </configuration>
            </execution>
        </executions>
    </plugin>


But Wait...There's More!

Once we had the capability to build a skeleton Grails project with Maven, we began the task of de-conflicting all of the other libraries resolved by Maven as we added our own code (and other internal libraries) to the project.  Our friend in this battle was the Maven dependency plugin and its "tree" goal (described earlier).  This approach worked for us when building against Grails 1.2.0 and 1.3.7, and early indications are that a similar approach will work with Grails 2.0 (formerly known as 1.4.x).  Keep in mind that the solution presented above does not come without its fair share of hacks (like making use of the BuildConfig.groovy file to remove dependencies injected via Ivy).  However, as long as Grails uses Ivy in its build infrastructure to resolve dependencies, these workarounds will be necessary when attempting to build a Grails application or plugin with Maven.  Finally, I have uploaded the complete sample Grails dependency POM file and the complete sample Grails application POM file:  Grails POM Files.  In my next post, I will look at how to get your Grails plugins resolved as Maven dependencies when building your application via Maven.  Our solution involved some of the tricks that you have already seen in this post, plus some custom Maven plugin work.  While our approach ultimately gave us true Maven build support for Grails plugins, my goal is to make the current Grails Maven plugin handle plugins out of the box so that we do not need a custom plugin just to deal with them, but that is a story for another time.