A Couple of Current Data Science Competitions

Decoding Brain Signals

Microsoft has recently announced a machine learning competition platform. As part of the launch, one of the first competitions involves predicting brain signals. It has $5000 in prizes, and submissions are accepted through June 30, 2016.

Big Data Viz Challenge

Google and Tableau have teamed up to offer a big data visualization contest. The rules are fairly simple: just create an awesome visualization that uses at least the GDELT data set. Finalists will receive prizes worth over $5000, and some will even get tours of Tableau and Google facilities. The contest runs through May 16, 2016.

Midwest Undergraduate Data Analytics Competition

The 2016 Midwest Undergraduate Data Analytics Competition (MUDAC) will be held at Winona State University in Winona, Minnesota on April 2 and 3.

  • What is MUDAC?
    MUDAC is an intense 2-day analytics competition aimed at undergraduate students. Teams compete to solve a problem posed by an external organization.
  • Who can compete?
    Teams of 3 to 4 undergraduate students attending a school in Minnesota, Wisconsin, Iowa, Illinois, North Dakota, or South Dakota.
  • Why attend MUDAC?
    • A fun learning experience
    • Friendly competition
    • Teamwork
    • Meet others with similar interests
    • Learn about data science/analytics careers
    • Practice preparing and giving a presentation
    • Cash prizes for winning
    • Door prizes

The competition also includes a panel discussion with some local data professionals. I am honored to be one of those panelists.

If you attend or teach at a university in the upper Midwest and you are interested in data science, you should strongly consider bringing a team to MUDAC. I hope to see you there.

National Data Science Bowl

Kaggle and Booz Allen Hamilton have just launched the National Data Science Bowl, a data science competition hosted on Kaggle.

If you are interested in getting started, a tutorial is available in IPython format. Best of luck!

Increase Your Kaggle Score With a Random Forest

Previously, I blogged about submitting your first solution to Kaggle for the Biological Response competition. That technique used logistic regression, and the resulting score was not very good. Now, let's try to improve upon that score. In this example, we will use a random forest; Kaggle claims that random forests have performed well in many of its competitions.

Setup

There is no setup required beyond what was done when submitting your first solution. This technique also uses Python and the same data and directory structure.

The Random Forest Code

Scikit-learn, the machine learning library for Python, has a nice implementation of a random forest. Here is some Python code to run the random forest. A special thanks to Ben Hamner for supplying the basic code.

#!/usr/bin/env python

from sklearn.ensemble import RandomForestClassifier
import csv_io
import scipy

def main():
    # read in the training file
    train = csv_io.read_data("train.csv")
    # set the training responses
    target = [x[0] for x in train]
    # set the training features
    train = [x[1:] for x in train]
    # read in the test file
    realtest = csv_io.read_data("test.csv")

    # random forest code
    rf = RandomForestClassifier(n_estimators=150, min_samples_split=2, n_jobs=-1)
    # fit the training data
    print('fitting the model')
    rf.fit(train, target)
    # run the model against the test data
    predicted_probs = rf.predict_proba(realtest)

    # write the probability of the positive class to the solution file
    predicted_probs = ["%f" % x[1] for x in predicted_probs]
    csv_io.write_delimited_file("random_forest_solution.csv", predicted_probs)

    print('Random Forest Complete! You Rock! Submit random_forest_solution.csv to Kaggle')

if __name__ == "__main__":
    main()

Raw code can be obtained here. (Please use the raw code if you are going to copy/paste). Now save this file as random_forest.py in the directory (c:/kaggle/bioresponse) you previously created.

Running the code

Then open the Python GUI. You may need to run the following commands to navigate to the correct directory.

import os
os.chdir('c:/kaggle/bioresponse')

Now you can run the actual random forest python code.

import random_forest
random_forest.main()

Results

Now upload random_forest_solution.csv to Kaggle and enjoy moving up the Leaderboard. This score should place you at or near the random forest benchmark. As of today (5/30/2012), that score sits roughly in the middle of the Leaderboard. Note: as the name implies, a random forest has a bit of randomness built into the algorithm, so your results may vary slightly.
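
If you want repeatable results, scikit-learn's RandomForestClassifier accepts a random_state argument; fixing it (not something the script above does) makes repeated runs produce the same predictions:

from sklearn.ensemble import RandomForestClassifier

# Optional: fix the seed for reproducible runs (the value 42 is arbitrary).
rf = RandomForestClassifier(n_estimators=150, min_samples_split=2,
                            n_jobs=-1, random_state=42)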

Once again if you performed these steps, I would love to know about it. Thanks for following along, and good luck with Kaggle.

Your First Kaggle Submission

Yesterday, I wrote a post explaining the Kaggle Biological Response competition. If you don’t know, Kaggle is a website for data science competitions. Now it is time to submit a solution. After this post, you should have a spot on the Leaderboard. Granted, it will not be first place but it won’t be last place either. If you have not already done so, please create an account at Kaggle.

Setup Python

For this example, we will use the Python programming language. You will need to perform the following steps to get going. These steps are for Windows machines, but they can easily be adapted for a UNIX/Linux/Mac system. (A quick import check, shown after the list, will confirm that everything installed correctly.)

  1. Install Python 2.7.3 – you need the programming language
  2. Install numpy – for linear algebra and other stuff
  3. Install scipy – for scientific calculations
  4. Install setuptools – easier Python package installation
  5. Install scikit-learn – machine learning for Python
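
Once the installers finish, a quick check from the Python shell confirms that everything is importable (the printed version numbers will simply be whatever you installed):

import numpy
import scipy
import sklearn

# If none of these imports raise an ImportError, the setup is complete.
print(numpy.__version__)
print(scipy.__version__)
print(sklearn.__version__)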

Setup A File Structure And Get Data

Next, create a directory on your C drive. Call it whatever you want; I recommend C:/kaggle/bioresponse. Then download and save the file csv_io.py for reading and writing CSV files (thanks to Ben Hamner of Kaggle for that file). Next, download the test and train files from Kaggle and save them to your directory.
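
In case that csv_io.py link ever goes away, here is a rough sketch of what the helper needs to do, inferred purely from how it is used in these posts (read a CSV past its header row as floats, and write one value per line). Ben Hamner's original file may differ in details, such as also writing a header row for the submission.

import csv

def read_data(file_name):
    # Read a CSV file, skip the header row, and return each row as a list of floats.
    with open(file_name) as f:
        reader = csv.reader(f)
        next(reader)  # skip the header
        return [[float(value) for value in row] for row in reader]

def write_delimited_file(file_name, lines):
    # Write one value per line.
    with open(file_name, "w") as f:
        for line in lines:
            f.write("%s\n" % line)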

The Default Solution

If you opened the test.csv file, you would have noticed it has 2501 rows of actual data. Thus, a very simple default solution is to create a submission file with 2501 rows and the number 0.5 on each row. Then go to Kaggle and upload the submission file. I will not walk through that code; there are many ways to do it manually or programmatically (one minimal sketch follows). This solution will get you on the Leaderboard near the bottom, but not last.
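
For instance, this tiny Python snippet does the job (the file name is arbitrary, and any approach that writes 2501 lines of 0.5 works just as well):

# Write 2501 lines, each containing 0.5 (the file name is arbitrary).
with open("default_solution.csv", "w") as f:
    f.write("0.5\n" * 2501)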

A Logistic Regression Solution

Now, if you know a little statistics, you will recognize this as a classification problem, since the observed responses are either 0 or 1. Thus, logistic regression is a decent algorithm to try. Here is the Python code to run logistic regression.

#!/usr/bin/env python

from sklearn.linear_model import LogisticRegression
import csv_io
import math
import scipy

def main():
    # read in the training file
    train = csv_io.read_data("train.csv")
    # set the training responses
    target = [x[0] for x in train]
    # set the training features
    train = [x[1:] for x in train]
    # read in the test file
    realtest = csv_io.read_data("test.csv")

    # code for logistic regression
    lr = LogisticRegression()
    lr.fit(train, target)
    predicted_probs = lr.predict_proba(realtest)

    # write solutions to file
    predicted_probs = ["%f" % x[1] for x in predicted_probs]
    csv_io.write_delimited_file("log_solution.csv", predicted_probs)

    print('Logistic Regression Complete! Submit log_solution.csv to Kaggle')

if __name__ == "__main__":
    main()

Raw code can be obtained here (Please use the raw code if you are going to copy/paste).
Save this file as log_regression.py in the directory you created above. Then open the Python GUI. You may need to run the following commands to navigate to the correct directory.

import os
os.chdir('c:/kaggle/bioresponse')

Now you can run the actual logistic regression.

import log_regression
log_regression.main()

Now upload log_solution.csv to Kaggle, and you are playing the game.

Results

If you performed these steps, I would love to know about it. Thanks for following along, and good luck with Kaggle.

Get Started With Kaggle – Description

Yesterday, I posted about the popularity of data hackathons. Well, today let’s get started with Kaggle. This is the first of a few simple posts about making your first submission to a Kaggle competition. I also promise you won’t be last place. You won’t be first either. This is an excellent way to start developing your data science skills.

The Problem

The Biological Response competition seems to be a good starting point, and the data is fairly straightforward. Each row represents a molecule. The first column represents a biological response, and the remaining 1776 columns are features of the molecule (technically, calculated molecular descriptors). Unfortunately, the data does not state what each column represents, so domain knowledge of biology is not really helpful.

The Data

For this problem, Kaggle provides two data sets. The first file is the training set; it includes both responses and features and, obviously, is used for training your algorithm. The actual responses are either 0 or 1. The second file, called the test file, is very similar except that it does not contain the responses.
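
If you want to confirm that layout for yourself, a quick peek at the training file (assuming you have already downloaded train.csv from the competition page into your working directory) shows a header row plus one row per molecule:

import csv

# Count the rows and columns in the training file.
with open("train.csv") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]
print("columns: %d" % len(header))    # 1 response column + 1776 descriptor columns
print("molecules: %d" % len(data))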

How To Submit A Solution

Your goal as a participant is to run your algorithm against the test file and predict the response. Each predicted response should be a value between 0 and 1. After your algorithm runs it should produce an output file with the predicted response for each row on a separate line. Your submission file is just a single column.

The Ranking

To submit a solution, you just upload your submission file. Kaggle then compares your predicted responses with the actual responses for the test set; Kaggle knows those values but does not share them with participants. The comparison method used for this competition is called Log Loss. For a description of Log Loss, see the Kaggle Wiki page about scoring metrics. The goal of this competition is to get the lowest score.
Note: only 2 submissions are allowed per day.
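
For reference, binary log loss averages -[y*log(p) + (1-y)*log(1-p)] over all rows, so confident wrong answers are punished heavily. Here is a minimal sketch of the computation (the clipping value is an assumption; scorers typically clip predictions away from exactly 0 and 1 so the logarithm stays finite):

import math

def log_loss(actual, predicted, eps=1e-15):
    # Binary log loss: lower is better. Predictions are clipped away from
    # 0 and 1 so the logarithm stays finite (the exact clipping value used
    # by Kaggle is an assumption here).
    total = 0.0
    for y, p in zip(actual, predicted):
        p = min(max(p, eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(actual)

# A confident correct answer scores near 0, the all-0.5 baseline scores
# about 0.693, and a confident wrong answer scores much worse.
print(log_loss([1, 0], [0.9, 0.1]))   # ~0.105
print(log_loss([1, 0], [0.5, 0.5]))   # ~0.693
print(log_loss([1, 0], [0.1, 0.9]))   # ~2.303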

You Can Do It

That is my brief description of a Kaggle competition. It doesn't sound too hard, does it? Tomorrow, we will step through making our first submission. Go register for an account so you are ready to submit a solution tomorrow. Be careful: once you start Kaggling (I think I just invented that word), you might not want to stop.

GitHub Is Cool: They Like Data

Today, GitHub announced the release of archived public activity data called the GitHub public timeline. The dataset can be queried via the Google BigQuery tool.

To make things even more awesome, GitHub is also hosting a Data Challenge. The challenge is to play around with the data and create the best visualization possible. You had better start now, because the competition ends May 21st. I am not familiar with Google BigQuery, so this might be a good time to learn.

This should not surprise anyone. GitHub is always doing cool things, especially for developer-minded people. If you don’t know, GitHub is the best place for hosting your source code.