Tag Archives: kaggle

Kaggle Launches New Products

If you follow the blog, you probably know I am a big fan of Kaggle. Just last week, they announced the launch of 2 new products.

  1. Kaggle Recruit: In this competition, the participants are not competing for a cash prize but rather for a job interview with a specific company. Currently, Facebook is hosting the first such competition.
  2. Kaggle Prospect: In this competition, the participants try to come up with the best question to ask. Participants are presented with various related datasets, and the goal is to find which data science question should be asked of the data. The winner gets a small cash prize, and the winning question becomes a regular Kaggle competition.

What do you think? Are you excited to try out these new competitions?

Increase Your Kaggle Score With a Random Forest

Previously, I blogged about submitting your first solution to Kaggle for the Biological Response competition. That technique used logistic regression, and the resulting score was not very good. Now, let's try to improve upon that score. In this example, we will use what is called a random forest. Kaggle claims that random forests have performed well in many of its competitions.


There is no setup required beyond what was done when submitting your first solution. This technique also uses Python as the software tool and the same data and directory structure.

The Random Forest Code

Scikit-learn, the machine learning library for Python, has a nice implementation of a random forest. Here is some Python code to run the random forest. A special thanks to Ben Hamner for supplying the basic code.

#!/usr/bin/env python

from sklearn.ensemble import RandomForestClassifier
import csv_io

def main():
    # read in the training file
    train = csv_io.read_data("train.csv")
    # set the training responses
    target = [x[0] for x in train]
    # set the training features
    train = [x[1:] for x in train]
    # read in the test file
    realtest = csv_io.read_data("test.csv")

    # random forest code
    rf = RandomForestClassifier(n_estimators=150, min_samples_split=2, n_jobs=-1)
    # fit the training data
    print('fitting the model')
    rf.fit(train, target)
    # run the model against the test data
    predicted_probs = rf.predict_proba(realtest)

    # keep the probability of class 1 and write it to the solution file
    predicted_probs = ["%f" % x[1] for x in predicted_probs]
    csv_io.write_delimited_file("random_forest_solution.csv", predicted_probs)

    print('Random Forest Complete! You Rock! Submit random_forest_solution.csv to Kaggle')

if __name__ == "__main__":
    main()

Raw code can be obtained here. (Please use the raw code if you are going to copy/paste). Now save this file as random_forest.py in the directory (c:/kaggle/bioresponse) you previously created.

Running the code

Then open the Python GUI. You may need to run the following commands to navigate to the correct directory.

import os
os.chdir("c:/kaggle/bioresponse")

Now you can run the actual random forest python code.

import random_forest


Now upload random_forest_solution.csv to Kaggle and enjoy moving up the Leaderboard. This score should place you at or near the random forest benchmark. As of today (5/30/2012), that score sits near the middle of the Leaderboard. Note: as the name implies, a random forest has a bit of randomness built into the algorithm, so your results may vary slightly.
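Since that run-to-run randomness can nudge your score, one way to make results repeatable is to fix the random_state parameter. This is an illustrative sketch of my own (it uses a small synthetic dataset rather than the competition data):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# small synthetic dataset standing in for the Kaggle data
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# fixing random_state makes the forest deterministic across runs
rf1 = RandomForestClassifier(n_estimators=50, random_state=42)
rf2 = RandomForestClassifier(n_estimators=50, random_state=42)

p1 = rf1.fit(X, y).predict_proba(X)
p2 = rf2.fit(X, y).predict_proba(X)
print((p1 == p2).all())  # True: both forests give identical predictions
```

Leave random_state unset when you want a fresh draw each run; set it when you want to compare changes to the model without the noise.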

Once again if you performed these steps, I would love to know about it. Thanks for following along, and good luck with Kaggle.

Your First Kaggle Submission

Yesterday, I wrote a post explaining the Kaggle Biological Response competition. If you don't know, Kaggle is a website for data science competitions. Now it is time to submit a solution. After this post, you should have a spot on the Leaderboard. Granted, it will not be first place, but it won't be last place either. If you have not already done so, please create an account at Kaggle.

Setup Python

For this example, we will use the Python programming language. You will need to perform the following steps to get going. These steps are for Windows machines, but they could easily be modified for a UNIX/Linux/Mac system.

  1. Install Python 2.7.3 – you need the programming language
  2. Install numpy – for linear algebra and other stuff
  3. Install scipy – for scientific calculations
  4. Install setuptools – easier Python package installation
  5. Install scikit-learn – machine learning for python

Setup A File Structure And Get Data

Next create a directory on your C drive. Call it whatever you want. I recommend C:/kaggle/bioresponse. Then download and save the file csv_io.py for reading and writing CSV files. Thanks to Ben Hamner of Kaggle for that file. Next, go download the test and train files from Kaggle and save to your directory.

The Default Solution

If you opened the test.csv file, you would have noticed it has 2501 rows of actual data. Thus, a very simple default solution is to create a submission file with 2501 rows and the number 0.5 on each row. Then go to Kaggle and upload the submission file. I will not provide code for creating that file; there are many ways to do it manually or programmatically. This solution will get you on the Leaderboard near the bottom, but not last.
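If you would rather not build that file by hand, here is one minimal sketch in Python (the filename default_solution.csv is my own choice, not a Kaggle requirement):

```python
# write 2501 lines, each containing 0.5, as the default submission
with open("default_solution.csv", "w") as f:
    for _ in range(2501):
        f.write("0.5\n")

# quick sanity check on the row count
print(sum(1 for _ in open("default_solution.csv")))  # prints 2501
```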

A Logistic Regression Solution

Now, if you know a little statistics, you will recognize this as a classification problem, since the observed responses are either 0 or 1. Thus, logistic regression is a decent algorithm to try. Here is the Python code to run logistic regression.

#!/usr/bin/env python

from sklearn.linear_model import LogisticRegression
import csv_io

def main():
    # read in the training file
    train = csv_io.read_data("train.csv")
    # set the training responses
    target = [x[0] for x in train]
    # set the training features
    train = [x[1:] for x in train]
    # read in the test file
    realtest = csv_io.read_data("test.csv")

    # code for logistic regression
    lr = LogisticRegression()
    lr.fit(train, target)
    predicted_probs = lr.predict_proba(realtest)

    # write the probability of class 1 to the solution file
    predicted_probs = ["%f" % x[1] for x in predicted_probs]
    csv_io.write_delimited_file("log_solution.csv", predicted_probs)

    print('Logistic Regression Complete! Submit log_solution.csv to Kaggle')

if __name__ == "__main__":
    main()

Raw code can be obtained here (Please use the raw code if you are going to copy/paste).
Save this file as log_regression.py in the directory you created above. Then open the Python GUI. You may need to run the following commands to navigate to the correct directory.

import os
os.chdir("c:/kaggle/bioresponse")

Now you can run the actual logistic regression.

import log_regression

Now upload log_solution.csv to Kaggle, and you are playing the game.


If you performed these steps, I would love to know about it. Thanks for following along, and good luck with Kaggle.

Get Started With Kaggle – Description

Yesterday, I posted about the popularity of data hackathons. Well, today let’s get started with Kaggle. This is the first of a few simple posts about making your first submission to a Kaggle competition. I also promise you won’t be last place. You won’t be first either. This is an excellent way to start developing your data science skills.

The Problem

The Biological Response competition seems to be a good starting point. The data is fairly straightforward: it consists of rows and columns. Each row represents a molecule. The first column represents a biological response, and the remaining 1776 columns are features of the molecule (technically, calculated molecular descriptors). Unfortunately, the data does not state what each column represents, so domain knowledge of biology is not really helpful.

The Data

For this problem, Kaggle provides two sets of data. The first file is the training set; it includes both responses and features and is used for training your algorithm. The actual responses are either 0 or 1. The second file, called the test file, is very similar except that it does not contain the responses.

How To Submit A Solution

Your goal as a participant is to run your algorithm against the test file and predict the response. Each predicted response should be a value between 0 and 1. After your algorithm runs, it should produce an output file with the predicted response for each row on a separate line. Your submission file is just a single column.

The Ranking

To submit a solution, you just upload your submission file. Kaggle then compares your predicted responses with the actual responses for the test set. Kaggle knows those values but does not share them with participants. The comparison method used for this competition is called Log Loss. For a description of Log Loss, see the Kaggle Wiki page about scoring metrics. The goal of this competition is to get the lowest score.
Note: only 2 submissions are allowed per day.
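For intuition, Log Loss rewards well-calibrated probabilities and heavily penalizes confident wrong answers. Here is a minimal sketch of the metric in Python (my own illustrative implementation, not Kaggle's scoring code):

```python
import math

def log_loss(actual, predicted, eps=1e-15):
    """Average negative log-likelihood of the true labels.

    Predictions are clipped away from 0 and 1 so the log stays finite.
    """
    total = 0.0
    for y, p in zip(actual, predicted):
        p = min(max(p, eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(actual)

# a confident correct prediction scores far better (lower) than a confident wrong one
print(log_loss([1, 0], [0.9, 0.1]))  # low loss
print(log_loss([1, 0], [0.1, 0.9]))  # high loss
```

Note that predicting 0.5 everywhere, as in the default solution, gives a loss of -log(0.5) ≈ 0.693 regardless of the true labels, which is why it lands near the bottom of the Leaderboard but not last.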

You Can Do It

That is my brief description of a Kaggle competition. It doesn't sound too hard, does it? Tomorrow, we will step through making our first submission. Go register for an account so you are ready to submit a solution tomorrow. Be careful: once you start Kaggling (I think I just invented that word), you might not want to stop.

Hackathons with Data are Everywhere

It seems that competitions and meetups for hacking data are all over the place. Coding challenges have been around for a long time. Recently, it appears that data is being thrown into the mix. I think the idea is great. Instead of just hacking some app, why not hack with some data that might help people?

GitHub just concluded the GitHub Data Challenge. Also, the world's first-ever global data science hackathon occurred last month. The Silicon Prairie has even hosted a couple of data-centric hackathons. The Omaha World-Herald newspaper organized Hack Omaha to turn government data into something useful. Open Iowa did a very similar thing for developers, designers, and data junkies in the Des Moines area. I did not personally attend either of these events, but I was surprised to see these types of events occurring in the Midwest.

DataKind is busy organizing data dives all over the country, and Kaggle is currently organizing data science competitions for anyone regardless of location. By the way, Kaggle has become one of my favorite sites, and I will be blogging soon about how to quickly get involved.

Anyhow, it appears hackathons with data are here to stay. What data hackathons or competitions do you know about? Are you planning to attend?

Data Scientists Get Ranked – NYTimes.com

I’ve never had a Google account, and I pay to use e-mail […] I know how much information they can collect, and how much people can learn from your clicks.

by Sergey Yurgenson, one of Kaggle's top competitors, as quoted in this article from NYTimes.com, Data Scientists Get Ranked. The article also lists some of the similarities among the top Kaggle competitors.

Kaggle Makes Data Science Fun

The tagline for Kaggle is "We're making data science a sport." They have successfully created a way to turn data science into a competition. It is fun, and it yields excellent results. There is also a portion of the site dedicated to classroom use, called Kaggle in Class.

Here is how it works. A company that needs some data analyzed can contact Kaggle and host a competition. Then data scientists all over the world compete to find the best solution. The company benefits from having many algorithms and techniques applied to the same data set, far more than it could run without Kaggle. The contestants benefit from networking, pre-cleaned data, and learning from others. It is a win/win situation. Plus, the winner gets prize money.

Currently, the large featured competition is the Heritage Health Prize, a $3,000,000 competition to identify individuals who will be admitted to the hospital in the next year. The contest lasts until April 2013.

This is definitely a site I want to be involved with in the future.  I just wish they could make it a spectator sport as well.

What Makes a Good Data Scientist?

Jeremy Howard is the Chief Scientist at Kaggle. At the end of this interview from the Strata Conference 2012, he identified four simple traits that a data scientist needs.

  1. Creativity
  2. Open-mindedness
  3. Tenacity
  4. A Good Skillset

Jeremy Howard of Kaggle at Strata 2012

In this brief interview he covers a range of other data science topics:

  • Big Data is an engineering problem
  • Analytics generate value/insight from data
  • Predictive Modeling is about answering a question – build a model to do that
  • Is Data Science about tools or people? – watch the video for Jeremy’s answer
  • And others…

See this previous post for more videos from Strata 2012.