DataQuest – Free Browser-based Learning for Data Science

DataQuest is a recently launched, browser-based platform for learning data science in Python. The site consists of a gamified series of missions that increase in difficulty as your skills progress. Here are a few other features of the site.

  • Sample Code
  • Live, Interactive Browser-based Coding Environment
  • Step by Step Instructions
  • Instant Feedback
  • Helpful Forums for Q&A

The site is still under development and the founder, Vik Paruchuri, is looking for help developing more content and missions for the site. If that is something of interest to you, get in touch with Vik via the DataQuest website.

Tomorrow is Data Innovation Day

Tomorrow, Jan 22, 2015, is Data Innovation Day 2015, and a free online conference will be held to mark it. A strong lineup of speakers and panels is planned. The topics of the talks are:

  • Data For Public Good
  • OpenData
  • Internet of Things
  • Analytic Innovations
  • Startups

The conference runs four hours, from 12:00 PM to 4:00 PM EST on Jan. 22, 2015. Register now to attend for free.

Next.ML Machine Learning Conference

If you are based near San Francisco and interested in machine learning, the Next.ML conference is happening this weekend, January 17, 2015. The conference is a series of workshops covering the latest trends in:

  • Deep learning
  • Probabilistic programming
  • Parallel learning
  • Julia
  • Other machine learning topics and tools

The lineup of speakers is great, coming from places like MIT, Facebook, Stanford, Domino Data Labs, and others. Bring your laptop because all participants will leave the conference with lots of great software and datasets.

Note: If you would like to attend the conference, you can use the coupon code “media” to save 30% off the conference admission.

The Goal is Data Products: Now How Do We Get There?

The primary output of data science is data products. A data product can be anything from a list of recommendations to a dashboard to a single chart: any product that aids in making a more informed decision. In the end, data science should produce usable results, and those results are the data product. The process used to create those data products needs a bit more formalization. Call it a methodology, process, lifecycle, or workflow; whatever the name, it needs to exist.
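To make "data product" concrete, here is a minimal, hypothetical sketch: a ranked list of recommendations built from a purchase log. The function name and the sample data are invented for illustration; the point is that even a short ranked list counts as a usable data product.

```python
from collections import Counter

def top_recommendations(purchases, n=3):
    """Return the n most-purchased items -- a minimal 'data product':
    a ranked list that helps someone decide what to suggest or stock."""
    counts = Counter(purchases)
    return [item for item, _ in counts.most_common(n)]

# Hypothetical purchase log
purchases = ["apples", "bread", "apples", "milk", "bread", "apples"]
print(top_recommendations(purchases, n=2))  # -> ['apples', 'bread']
```

The model here is trivial on purpose; the decision-support output, not the sophistication of the method, is what makes it a data product.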

Dr. Kirk Borne offered some thoughts in July 2014 with his article, Raising the Standard in the Big Data Analytics Profession. Data science needs standards and possibly even a workflow, but the focus on data products cannot be lost.

[Image: burndown chart]

Data Science is not Software Engineering

Data science is often treated as software engineering because both involve writing code, but they are not the same thing. Agile, waterfall, and scrum are not pluggable methodologies that can simply be applied to data science. Data science is more science and less engineering; it should therefore follow something closer to the scientific method.

Existing Data Science Workflows

Luckily, some options already exist for data science. Much like software engineering, there is no magic workflow that fits every project. The goal is to find the workflow that best fits the needs of the current project.

CRISP-DM

The oldest and most popular method is CRISP-DM. CRISP-DM was designed for data mining projects, which are closer to data science than software engineering is, but still not an exact match. The six steps of CRISP-DM are:

  1. Business Understanding
  2. Data Understanding
  3. Data Preparation
  4. Modeling
  5. Evaluation
  6. Deployment
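The six phases above can be sketched as a pipeline of functions. This is a hypothetical, stdlib-only illustration (the churn question, field names, and trivial "model" are all invented); real CRISP-DM projects iterate between phases rather than run them once, top to bottom.

```python
# A minimal sketch of the six CRISP-DM phases as functions.

def business_understanding():
    # 1. Frame the question the data product must answer.
    return "Which plan has the highest churn?"

def data_understanding(raw):
    # 2. Inspect the raw data: size and available fields.
    return {"rows": len(raw), "fields": sorted(raw[0])}

def data_preparation(raw):
    # 3. Clean: drop records missing the target field.
    return [r for r in raw if r.get("churned") is not None]

def modeling(prepared):
    # 4. Fit a (trivial) model: churn rate by plan type.
    rates = {}
    for r in prepared:
        rates.setdefault(r["plan"], []).append(r["churned"])
    return {plan: sum(v) / len(v) for plan, v in rates.items()}

def evaluation(model):
    # 5. Check that the model answers the business question.
    return max(model, key=model.get)

def deployment(result):
    # 6. Ship the finding as a usable data product (here, a report line).
    return "Highest-churn plan: " + result

raw = [
    {"plan": "basic", "churned": 1},
    {"plan": "basic", "churned": 0},
    {"plan": "pro", "churned": 0},
    {"plan": "pro", "churned": None},  # dropped during preparation
]
prepared = data_preparation(raw)
model = modeling(prepared)
print(deployment(evaluation(model)))  # -> Highest-churn plan: basic
```

Note how the pipeline ends in a deployable output, which keeps the focus where the article argues it belongs: on the data product, not the process.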

Data Science Project Lifecycle

The Data Science Project Lifecycle is a recent modification of CRISP-DM with a bit more of an engineering focus. Its steps are:

  1. Data acquisition
  2. Data preparation
  3. Hypothesis and modeling
  4. Evaluation and Interpretation
  5. Deployment
  6. Operations
  7. Optimization

Data Science Workflow

The Data Science Workflow: Overview and Challenges was published on the ACM blog in 2013 as part of a dissertation by Philip Guo. Here are the steps:

  1. Preparation
  2. Analysis
  3. Reflection
  4. Dissemination

Those are three workflow options for data science, but they are not the only ones. Feel free to modify a workflow to best suit the project. It will be exciting to see the new data science workflows created in the near future, and fun to see which ones turn out to be the most beneficial.

One thing a data product must do is help answer a question. Thus, a logical starting point for data science is a good question. Just don't let the workflow become about the process itself, as often happens in software engineering. Keep the focus on data products.


Note:
I have previously written two posts on this topic, and I don't think either post gets the methodology exactly right.

Learn some Deep Learning in 2015

Here are some great resources to kickstart your deep learning.

Want to make an impact for social good this summer?

Again this summer, the University of Chicago is hosting the Data Science for Social Good (DSSG) Fellowship Program. DSSG is a 12-week training program for aspiring data scientists interested in working on problems in the non-profit and government sectors. The program is collaborative, creative, and project-based. All fellows work on real problems from organizations seeking social impact. And yes, fellows are paid and receive a housing stipend.

DSSG looks for the following characteristics in applicants:

  • Passion for doing social good
  • Preferably a graduate student or a recent grad
  • Some programming, stats, and data analysis skills (don’t have to be an expert)
  • For more see the FAQs page

The deadline to apply for the fellowship is Feb 1, 2015.

If you are not new to data science, DSSG is also looking for mentors to help guide and lead the fellows.