
USPS AIS bulk data loading with Hadoop MapReduce

Today I pushed some source up to GitHub for a utility I had previously been working on to load data from USPS AIS data files into HBase/MySQL using Hadoop MapReduce as well as simpler data loaders. Source @ https://github.com/bitsofinfo/usps-ais-data-loader

This project was originally started to create a framework for loading data files from the USPS AIS suite of data products (zipPlus4, cityState). The project has not been worked on in a while, but I figured I’d open-source it in case some folks would like to team up and work on it further; if so, let me know! Throwing it out there under the Apache 2.0 license. Some of the libs need updating as well; for instance, it was originally developed with Spring 2.5.

USPS AIS data files are fixed-length record formats. This framework was created to handle bulk loading/updating of this data into a structured/semi-structured store of address data (e.g. MySQL or HBase). It is wired together using Spring and built with Maven. A key package is “org.bitsofinfo.util.address.usps.ais”, which defines the POJOs for the records and leverages a custom annotation that binds record properties to the positions within the fixed-length records that contain the data being loaded.
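To give a rough idea of the binding approach, here is a sketch; the annotation name, record class, field names, and offsets below are all made up for illustration and are not the project’s actual API.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Field-level annotation binding a POJO property to a slice of a
// fixed-length record (illustrative only).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface FixedLengthField {
    int start();   // zero-based offset within the record
    int length();  // number of characters occupied by the field
}

// Example record POJO; these offsets/lengths are invented, not the
// actual USPS AIS layout.
class CityStateRecord {
    @FixedLengthField(start = 0, length = 1)
    private String copyrightDetailCode;

    @FixedLengthField(start = 1, length = 5)
    private String zipCode;

    @FixedLengthField(start = 6, length = 28)
    private String cityName;
}

A loader can then reflect over these annotations and pull each field out of the raw line with substring(start, start + length).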

Initial loader implementations include a single-JVM multi-threaded version as well as a second one that leverages Hadoop MapReduce to split the AIS files up across HDFS and process them in parallel on the Hadoop nodes, ingesting the data much faster than on just one box. Both of these operate asynchronously given a load job submission, and ingestion times are significantly faster using Hadoop.
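Roughly speaking, both implementations sit behind the same kind of asynchronous contract; the interface below is only an illustrative sketch, not the project’s actual types.

import java.net.URI;

// Illustrative async loader contract; the multi-threaded and the
// Hadoop-backed implementations would both sit behind something like this.
public interface AddressDataLoader {

    /** Submit a load job for the given AIS file; returns a job id immediately. */
    String submitLoadJob(URI aisFileLocation);

    /** Poll the state of a previously submitted job. */
    JobState getJobState(String jobId);

    enum JobState { QUEUED, RUNNING, SUCCEEDED, FAILED }
}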

This project also needed a Hadoop InputFormat/RecordReader that could read fixed-length data files (none existed at the time), so I created one for this project (FixedLengthInputFormat). It was also contributed as a patch to the Hadoop project. The source is included here and updated for Hadoop 0.23.1 (not yet tested); however, the patch that was submitted to the Hadoop project is still pending and was compiled under 0.20.x. The 0.20.x version in the patch files was tested and functionally running on a 4-node Hadoop and HBase cluster.
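For a sense of how the input format is wired into a job, here is a minimal map-only sketch. It assumes a FixedLengthInputFormat that exposes a setRecordLength(Configuration, int) helper and emits BytesWritable record values; the 182-byte record length and the mapper are purely illustrative, so check the actual class for the exact API.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AisLoadJob {

    // Placeholder mapper: each value holds one complete fixed-length AIS record;
    // a real implementation would slice fields by position and write to HBase/MySQL.
    public static class AisRecordMapper
            extends Mapper<LongWritable, BytesWritable, Text, NullWritable> {
        @Override
        protected void map(LongWritable offset, BytesWritable record, Context ctx)
                throws IOException, InterruptedException {
            String raw = new String(record.getBytes(), 0, record.getLength());
            ctx.write(new Text(raw), NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FixedLengthInputFormat.setRecordLength(conf, 182); // example length only

        Job job = Job.getInstance(conf, "usps-ais-load");
        job.setJarByClass(AisLoadJob.class);
        job.setInputFormatClass(FixedLengthInputFormat.class);
        job.setMapperClass(AisRecordMapper.class);
        job.setNumReduceTasks(0); // map-only ingest
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}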

You can read more about the fixed length record reader patch @

https://bitsofinfo.wordpress.com/2009/11/01/reading-fixed-length-width-input-record-reader-with-hadoop-mapreduce/

https://issues.apache.org/jira/browse/MAPREDUCE-1176 

The USPS AIS products have some sample data-sets available online at the USPS website; however, for the full product data files you need to pay for the data and/or a subscription for delta updates. Some of the unit tests reference files from the real data-sets; those files have been omitted, so you will have to replace them with the real ones. Other unit tests reference the sample files freely available via USPS or other providers.

Links where USPS data files can be purchased:

https://www.usps.com/business/address-information-systems.htm

http://www.zipinfo.com/products/natzip4/natzip4.htm


AbstractTransactionalJUnit4SpringContextTests Failing to Rollback in MySQL?

Ok, so today I was working on some JUnit tests within Spring using AbstractTransactionalJUnit4SpringContextTests (yes, wow, what a long class name). For those of you unfamiliar with this class: if you extend your unit test class from it, every @Test-annotated test method will run within a transaction, with the default behavior being that a rollback is issued after each @Test method’s execution (unless you declare otherwise). Pretty convenient.
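As a rough illustration of the pattern (the context file, table name, and SQL below are made up; the simpleJdbcTemplate field and countRowsInTable helper come from the base class in Spring 2.5-era releases), a test might look like:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;
import org.springframework.test.context.transaction.AfterTransaction;

@ContextConfiguration("classpath:test-context.xml")
public class AddressDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    public void insertRowsInsideTransaction() {
        // Runs inside the test-managed transaction.
        simpleJdbcTemplate.update(
            "insert into address (line1, zip) values (?, ?)", "123 Main St", "55401");
        assertEquals(1, countRowsInTable("address"));
        // No cleanup needed: the transaction is rolled back after the test.
    }

    @AfterTransaction
    public void verifyRollback() {
        // Runs after the transaction completes; with the default rollback the
        // row should be gone (this is the kind of assert that was failing for me,
        // as described below).
        assertEquals(0, countRowsInTable("address"));
    }
}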

My simple test case was leveraging a DAO derived from HibernateDaoSupport, and the test inserted a few records. When the test was complete, another method annotated with @AfterTransaction would verify that the data did NOT exist in the table, which was expected due to the rollback that occurs after each test case... well, my asserts were failing because Hibernate had created the MySQL tables using MyISAM, which does not support transactions. If you encounter this kind of issue, all you have to do is change your Hibernate dialect to MySQL5InnoDBDialect within your session factory configuration, so that Hibernate’s schema export creates the tables with the InnoDB storage engine (which does support transactions), as such:

...
<property name="hibernateProperties">
  <props>
    <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</prop>
    ...
  </props>
</property>
...