Monday, October 28, 2013

Philly Cassandra User's Group: Real-time Analytics w/ Acunu

Tomorrow night, we are set to host another Cassandra meetup.  This time, we will focus on real-time analytics on Cassandra (using Acunu).

See the Philly meetup page for more info:

Friday, October 18, 2013

Stoked to be named to the DataStax Cassandra Rebel Elite team!

Thanks to the Apache Cassandra community and to the crew at DataStax.

I challenge anyone to come up with a more badass sounding name for such a crew of tech-heads.

I feel compelled to go out and buy this guy:

FOR SALE: Content Management System built on Cassandra

Last spring, I attended Philly Emerging Technologies for the Enterprise (ETE).  It is always a fantastic event, and this year was no different.  One of the new additions this year was an Enterprise Hackathon, in which a few large enterprises brought real business problems to the table and offered prizes for solving them.

The one that caught my eye was the problem of content management for Teva, a large pharmaceutical company.  In looking at the problem of content management, it occurred to me that enterprises these days are dealing as much with externally generated content as with internally generated content.

With that in mind, I thought it would be fun to build an application that added the dimension of externally generated content to a content management system.  I took a few weeks and built Skookle (a play on "Schuylkill", the main river through Philly).

Skookle is a new perspective on CMS.  It has an HTML5 user interface that allows users to drag and drop files onto their browser (like Dropbox, but through the browser).  Each file is persisted, with versioning, into Cassandra using Astyanax chunked object storage, and its description is indexed using Elasticsearch.  Skookle is also integrated with Twitter: it watches for mentions of the company, which then show up directly on the user interface.
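To give a feel for the chunked-storage idea, here is a purely illustrative sketch (plain Python, not the actual Astyanax API): a blob gets split into fixed-size chunks stored under (name, version, chunk index) keys, which is roughly how large objects land in Cassandra.

```python
# Illustrative sketch of chunked object storage with versioning.
# A tiny in-memory dict stands in for a Cassandra column family.
CHUNK_SIZE = 4  # tiny for demonstration; real chunk sizes are much larger

store = {}     # (name, version, chunk index) -> chunk bytes
versions = {}  # name -> latest version number

def put(name, data, chunk_size=CHUNK_SIZE):
    """Store a new version of an object as fixed-size chunks."""
    version = versions.get(name, 0) + 1
    versions[name] = version
    for idx in range(0, len(data), chunk_size):
        store[(name, version, idx // chunk_size)] = data[idx:idx + chunk_size]
    return version

def get(name, version=None):
    """Reassemble an object (latest version by default) from its chunks."""
    version = version or versions[name]
    idx, out = 0, b""
    while (name, version, idx) in store:
        out += store[(name, version, idx)]
        idx += 1
    return out
```

Because every put bumps the version, older revisions stay retrievable, which is the property the versioning feature relies on.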

For a demo, check out this quick video:

Here is the writeup I did that accompanied the submission:

That contains an estimate for the work that remains to make this a product.

I definitely think there is a lot of potential here for anyone who can re-envision content management to incorporate externally generated content.  Traditional solutions tend to be either internally focused (e.g. SharePoint) or externally focused (sentiment analysis, etc.).  There is room for a tool that bridges that gap, allowing a user (e.g. a brand manager) to see what people are saying (including competitors), echo and reinforce the good content, and counter the bad... in real-time.

If anyone wants to pick up where I left off, drop me a line.  Along with the vision, I'm selling the code base.  =)

I think it's a great little seed of a project/idea.
Email me: bone at alumni dot brown dot edu

Thursday, October 17, 2013

Crawling the Web with Cassandra and Nutch

So, you want to harvest a massive amount of data from the internet?  What better storage mechanism than Cassandra?  This is easy to do with Nutch.

Often people use HBase behind Nutch.  This works, but it may not be an ideal solution if you are (or want to be) a Cassandra shop.   Fortunately, Nutch 2+ uses the Gora abstraction layer to access its data storage mechanism, and Gora supports Cassandra.  Thus, with a few tweaks to the configuration, you can use Nutch to harvest content directly into Cassandra.

We'll start with Nutch 2.1...  I like to go directly from source:

$ git clone -b 2.1 https://github.com/apache/nutch.git
$ cd nutch
$ ant

After the build, you will have a nutch/runtime/local directory, which contains the binaries for execution.  Now let's configure Nutch for Cassandra.

First, we need to give our crawler an agent name, by adding the following property to nutch/conf/nutch-site.xml:
 <property>
  <name>http.agent.name</name>
  <value>My Nutch Spider</value>
 </property>

Next, we need to tell Nutch to use Gora's Cassandra store as its persistence mechanism. For that, we add the following property to nutch/conf/nutch-site.xml:
 <property>
  <name>storage.data.store.class</name>
  <value>org.apache.gora.cassandra.store.CassandraStore</value>
  <description>Default class for storing data</description>
 </property>

Next, we need to tell Gora about Cassandra.  Edit the nutch/conf/gora.properties file.  Comment out the SQL entries, and uncomment the following line:
gora.cassandrastore.servers=localhost:9160

Additionally, we need to add a dependency for gora-cassandra.  Edit the ivy/ivy.xml file and uncomment the following line:
<dependency org="org.apache.gora" name="gora-cassandra" rev="0.2" conf="*->default" />

Finally, we want to re-generate the runtime with the new configuration and the additional dependency.  Do this with the following ant command:
ant runtime

Now we are ready to run!

Create a directory called "urls", with a file named seed.txt that contains the URL(s) you want to crawl, one per line (e.g. http://nutch.apache.org/).

Next, update the url regular expression at the bottom of conf/regex-urlfilter.txt so it accepts the domain(s) you want to crawl (e.g. +^http://([a-z0-9]*\.)*nutch.apache.org/).

Now, crawl!
bin/nutch crawl urls -dir crawl -depth 3 -topN 5

That will harvest webpages to Cassandra!!

Let's go look at the data model for a second...
You will notice that a new keyspace was created: webpage.  That keyspace contains three tables: f, p, and sc.

[cqlsh 2.3.0 | Cassandra 1.2.1 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh> describe keyspaces;
system  webpage  druid  system_auth  system_traces
cqlsh> use webpage;
cqlsh:webpage> describe tables;
f  p  sc

Each of these tables is a pure key-value store.  To understand what is in each of them, take a look at the nutch/conf/gora-cassandra-mapping.xml file.  I've included a snippet below:
        <field name="baseUrl" family="f" qualifier="bas"/>
        <field name="status" family="f" qualifier="st"/>
        <field name="prevFetchTime" family="f" qualifier="pts"/>
        <field name="fetchTime" family="f" qualifier="ts"/>
        <field name="fetchInterval" family="f" qualifier="fi"/>
        <field name="retriesSinceFetch" family="f" qualifier="rsf"/>

From this mapping file, you can see what Gora puts in each table, but unfortunately the schema isn't really conducive to exploration from the CQL prompt.  (I think there is room for improvement here.)  It would be nice if there were a CQL-friendly schema in place, but that may be difficult to achieve through Gora.  Alas, that is probably the price of abstraction.
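To make the mapping concrete, here is an illustrative sketch (plain Python, not Gora itself, with made-up record values) of how a WebPage record gets flattened into the sparse qualifier/value cells you'd find in the f table:

```python
# Illustrative only: mimics how gora-cassandra flattens record fields
# into short column qualifiers, per gora-cassandra-mapping.xml.
MAPPING = {
    "baseUrl":           ("f", "bas"),
    "status":            ("f", "st"),
    "prevFetchTime":     ("f", "pts"),
    "fetchTime":         ("f", "ts"),
    "fetchInterval":     ("f", "fi"),
    "retriesSinceFetch": ("f", "rsf"),
}

def flatten(row_key, record):
    """Turn field->value pairs into per-family cells keyed by
    (row key, qualifier) -- the key-value shape seen in the tables."""
    cells = {}
    for field, value in record.items():
        family, qualifier = MAPPING[field]
        cells.setdefault(family, {})[(row_key, qualifier)] = value
    return cells

# a made-up record; Nutch 2.x keys rows by the reversed URL
page = {"baseUrl": "http://nutch.apache.org/", "status": 2}
print(flatten("org.apache.nutch:http/", page))
```

Only the fields a page actually has become cells, which is why the tables look like sparse key-value stores rather than fixed-width rows.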

So, the easiest thing is to use the Nutch tooling to retrieve the data.  You can extract the data with the following command:
runtime/local/bin/nutch readdb -dump data -content

When that completes, go into the data directory and you will see the output of the Hadoop job that was used to extract the data.  We can then use this for analysis.
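As one example of such analysis, and assuming the dump lands as plain-text part-* files under data/ (the exact format is whatever the Hadoop job emitted, so adapt the pattern to what you see), a quick script can pull the distinct URLs back out:

```python
# Sketch: extract distinct URLs from the text files a `readdb -dump` run
# produced.  Assumes plain-text part-* files under the given directory.
import re
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"']+")

def urls_in_dump(dump_dir="data"):
    """Scan every part-* file in the dump directory for URLs."""
    found = set()
    for part in Path(dump_dir).glob("part-*"):
        found.update(URL_RE.findall(part.read_text(errors="replace")))
    return sorted(found)
```

From there it's easy to feed the URL list into whatever downstream analysis you have in mind.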

I really wish Nutch used a better schema for C*.   It would be fantastic if that data was immediately usable from within C*.  If someone makes that enhancement, please let me know!