
7 Important Data Science Papers

It is back-to-school time, and here are some papers to keep you busy this school year. All the papers are free. This list is far from exhaustive, but these are some important papers in data science and big data.

Google Search

  • PageRank – This is the paper that explains the algorithm behind Google search.
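To get a feel for the idea, here is a minimal power-iteration sketch in Python. The toy link graph, function name, and damping factor are illustrative choices of mine, not Google's actual implementation.

```python
# A minimal power-iteration sketch of PageRank (illustrative, not production code):
# rank flows along links, and a damping factor d models a random jump to any page.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:
                share = d * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of three pages
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```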

Hadoop

  • MapReduce – This paper explains a programming model for processing large datasets. In particular, it is the programming model used in Hadoop (a toy word-count sketch of the model follows this list).
  • Google File System – Hadoop includes HDFS, an open-source implementation of the distributed file system described in this paper.
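As a rough illustration of the MapReduce model, here is a single-process word-count sketch in Python. The function names and sample documents are made up for this post; a real Hadoop job distributes the map and reduce tasks across a cluster.

```python
# A toy, single-process illustration of the MapReduce programming model (word count).

from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) pairs, one per word in the input split.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for a key into a single result.
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog", "the fox"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```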

NoSQL

These are 2 of the papers that started the NoSQL debate. Each paper describes a different type of storage system intended to be massively scalable.

Machine Learning

Bonus Paper

  • Random Forests – One of the most popular machine learning techniques. It is heavily used in Kaggle competitions, even by the winners.
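If you want to try the technique, below is a short example using scikit-learn's RandomForestClassifier on a bundled toy dataset; the parameter values are illustrative, not tuned recommendations.

```python
# A quick random forest example on the iris dataset using scikit-learn.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of decision trees, each trained on a bootstrap sample of the data.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```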

Are there any other papers you feel should be on the list?

12 Useful Tips for Machine Learning

Pedro Domingos of the Department of Computer Science and Engineering at the University of Washington has written a very useful paper of machine learning tips. The paper is titled A Few Useful Things to Know about Machine Learning [pdf].

Below are the 12 useful tips.

  1. LEARNING = REPRESENTATION + EVALUATION + OPTIMIZATION
  2. IT’S GENERALIZATION THAT COUNTS
  3. DATA ALONE IS NOT ENOUGH
  4. OVERFITTING HAS MANY FACES
  5. INTUITION FAILS IN HIGH DIMENSIONS
  6. THEORETICAL GUARANTEES ARE NOT WHAT THEY SEEM
  7. FEATURE ENGINEERING IS THE KEY
  8. MORE DATA BEATS A CLEVERER ALGORITHM
  9. LEARN MANY MODELS, NOT JUST ONE
  10. SIMPLICITY DOES NOT IMPLY ACCURACY
  11. REPRESENTABLE DOES NOT IMPLY LEARNABLE
  12. CORRELATION DOES NOT IMPLY CAUSATION

For details and a good explanation of each, see the paper A Few Useful Things to Know about Machine Learning [pdf].
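As a quick illustration of tips 2 and 4, the sketch below (my own example, not from the paper) fits two decision trees of different depths. The unconstrained tree typically scores near-perfectly on the training data while doing no better, and often worse, on held-out data. The dataset and depth values are arbitrary choices for illustration.

```python
# Overfitting vs. generalization: train and test accuracy for two tree depths.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (3, None):  # None = grow until every leaf is pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(f"max_depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")
```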

Also, later this year, Pedro Domingos will be teaching a machine learning course via Coursera. Sign up if you are interested.

Data Science: The Paper that Started it All

Although Tobias Mayer may be known as the first data scientist, he did not coin the term data science. According to Wikipedia, the first use of the term data science was in 2001.

Data Science: An Action Plan for Expanding the Technical Areas of the Field of Statistics was published in the April 2001 edition of the International Statistical Review. The author was William S. Cleveland, currently a Professor of Statistics at Purdue University.

The paper proposes a new field of study named data science. It then goes on to list and explain 6 technical focus areas for a university data science department.

  1. Multidisciplinary Investigations
  2. Models and Methods for Data
  3. Computing with Data
  4. Pedagogy
  5. Tool Evaluation
  6. Theory

For the most part, the paper is still relevant. I did find a couple of good quotes from the paper that deserve comment.

The primary agents for change should be university departments themselves.

That did not happen. The driving agents for change in the data science field have been some of the newer technology/web companies such as LinkedIn, Twitter, and Facebook (none of which even existed in 2001).

…knowledge among computer scientists about how to think of and approach the analysis of data is limited, just as the knowledge of computing environments by statisticians is limited. A merger of the knowledge bases would produce a powerful force for innovation.

I think this statement still applies today. The world is just starting to realize the benefits of merging knowledge from computer science and statistics. There is much more work to do. Fortunately, businesses and universities are working to address the merger.

Have you seen the paper before? What are your thoughts on it?