Machine Learning with Python


Data science plays an increasingly important role in the era of big data. In the current job market, posted data scientist positions outnumber qualified applicants. One important reason is that many data analysts can only use SAS for analysis in non-Hadoop environments and do not know how to use open source tools (such as R, Python, or Scala) for analysis.

 

However, in Canada open source tools are becoming more and more popular across all industries and are quickly overtaking SAS for data analysis. For instance, the big five banks, telecom firms, and consulting companies are using Python, R, or Scala instead of SAS for big data analysis and modeling in Hadoop. SAS skills are no longer as attractive to employers as before; instead, open source skills have become required and more valued. For programming language popularity, see http://www.tiobe.com/tiobe_index. According to the 2016 TIOBE index, Python moved up three spots within the last year to claim the number 5 position, R was ranked 16, and SAS dropped to number 21. Data scientist is a new role compared with the traditional data analyst, with more opportunities, better prospects, and higher pay. Capture this opportunity with good preparation, and don't miss it! Search for "Data Scientist" on any job site to see how hot and in-demand the role is.

 

To fit the needs of the data scientist job market, this course provides the knowledge and skills to help data analysts optimize their data science learning path and successfully transition into the data scientist role. The topics come from an analysis of real requirements in data scientist job listings from the biggest tech employers. The course not only walks you step by step through installing the Python interpreter and ingesting and wrangling data, but also guides you end to end through developing machine learning models in Python.

 

The course is organized around three themes designed to get you using Python for applied machine learning quickly and effectively:

 

Lessons: Learn how data can be processed in Python (Python fundamentals) and how a machine learning project maps onto Python, along with best practices for working through each task (advanced Python for machine learning), across two sessions

 

Projects: Tie together the knowledge from the lessons by working through case-study data processing and predictive modeling problems

 

Recipes: A catalog of standalone machine learning recipes in Python, provided as a bonus, which you can copy and paste as a starting point for your own projects

 

Who this course is designed for:

·         Anyone without prior coding or scripting experience but with a science, engineering, or finance background and an aspiration to become a data scientist

·         New graduates with a science, engineering, or finance background who would like to use Python to perform data science work

·         Developers and programmers who want to expand their knowledge of data manipulation and machine learning

·         SAS programmers in finance, telecom, or other non-tech industries who want to transition from reporting or data cleaning into the data scientist role

After mastering what you learn in this course, you can confidently pursue data scientist jobs.

 

Part 1: Introduction to Machine Learning with Python

·         Inclination

·         Lessons

·         Projects

·         Recipes

·         What You Learn From This Course

·         FAQ

Part 2: Introducing EDA

·         Understand Data With Descriptive Statistics

·         Understand Data With Visualization

·         The Detection and Treatment of Outliers

o   Univariate outlier detection

o   EllipticEnvelope

o   OneClassSVM
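As a taste of this topic, here is a minimal sketch of both styles of outlier detection with scikit-learn; the synthetic data and the 3-standard-deviation cutoff are illustrative choices, not fixed by the course:

```python
# Illustrative sketch: univariate vs. multivariate outlier detection.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(42)
X = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
X[0] = [8.0, 8.0]  # inject one obvious outlier

# Univariate rule of thumb: flag points more than 3 standard deviations out
z_scores = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
univariate_outliers = np.where((z_scores > 3).any(axis=1))[0]

# Multivariate detection: fit an elliptic envelope to the bulk of the data;
# fit_predict returns -1 for outliers and 1 for inliers
detector = EllipticEnvelope(contamination=0.01, random_state=42)
labels = detector.fit_predict(X)
multivariate_outliers = np.where(labels == -1)[0]
```

OneClassSVM (covered above) follows the same fit/predict pattern but makes no Gaussian assumption about the inliers.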

 

·         Pre-Process Data

o   Data Transforms

o   Rescale Data

o   Standardize Data

o   Normalize Data

o   Binarize Data
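A minimal sketch of the four transforms listed above, using scikit-learn's preprocessing module on a tiny made-up matrix:

```python
# Rescale, standardize, normalize, and binarize a small example matrix.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer, Binarizer

X = np.array([[1.0, -1.0,  2.0],
              [2.0,  0.0,  0.0],
              [0.0,  1.0, -1.0]])

rescaled     = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)  # each column into [0, 1]
standardized = StandardScaler().fit_transform(X)                    # mean 0, std 1 per column
normalized   = Normalizer(norm='l2').fit_transform(X)               # each row to unit length
binarized    = Binarizer(threshold=0.5).fit_transform(X)            # 0/1 at a cutoff
```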

 

·         Dimensionality Reduction

o   The Covariance Matrix

o   Principal Component Analysis (PCA)

o   RandomizedPCA

o   Latent Factor Analysis (LFA)

o   Linear Discriminant Analysis (LDA)

o   Latent Semantic Analysis (LSA)

o   Independent Component Analysis (ICA)

o   Kernel PCA
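To preview the workhorse of this list, here is a PCA sketch that projects four features down to two components; the iris dataset is just a convenient stand-in:

```python
# Reduce 4 features to 2 principal components and check retained variance.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data            # 150 samples, 4 features
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# explained_variance_ratio_ tells you how much information the projection keeps
explained = pca.explained_variance_ratio_.sum()
```

The other techniques above (LDA, ICA, Kernel PCA, and so on) expose the same fit_transform interface in scikit-learn.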

·         Exercise

Part 3: Feature Selection

·         Univariate Selection

·         Recursive Feature Elimination

·         Stability and L1 Based Selection

·         Feature Importance
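As a sketch of two of these techniques, the snippet below runs univariate selection and recursive feature elimination side by side; the iris dataset and the choice of logistic regression as the RFE estimator are illustrative:

```python
# Univariate selection and RFE, both keeping 2 of 4 features.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Univariate selection: keep the k features with the best ANOVA F-score
X_best = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)

# Recursive feature elimination: repeatedly drop the weakest feature
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=2)
rfe.fit(X, y)
ranking = rfe.ranking_   # 1 marks a selected feature
```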

·         Exercise

Part 4: Resampling Methods

·         Train and Test Sets

·         K-fold Cross Validation

·         Leave-One-Out Cross Validation

·         Repeated Random Test-Train Splits
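The contrast at the heart of this part can be sketched in a few lines: a single train/test split gives one score that depends on the split, while k-fold cross validation averages over every fold. The dataset and model here are illustrative:

```python
# One train/test split vs. 10-fold cross validation on the same model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Single split: fast, but the score varies with the split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=7)
split_score = model.fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold CV: every sample is used for testing exactly once
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
cv_scores = cross_val_score(model, X, y, cv=kfold)
mean_cv = cv_scores.mean()
```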

 

Part 5: Algorithm Evaluation Metrics

·         Classification Metrics

o   Classification Accuracy

o   Logarithmic Loss

o   Area under ROC Curve

o   Confusion Matrix

o   Classification Report
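The metrics above can all be computed from scikit-learn's metrics module; here is a sketch on a tiny hand-made example:

```python
# Accuracy, log loss, confusion matrix, and a text report for a binary task.
from sklearn.metrics import accuracy_score, log_loss, confusion_matrix, classification_report

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
y_prob = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8],
          [0.1, 0.9], [0.7, 0.3], [0.8, 0.2]]

acc    = accuracy_score(y_true, y_pred)          # fraction of correct labels
loss   = log_loss(y_true, y_prob)                # penalizes confident wrong answers
cm     = confusion_matrix(y_true, y_pred)        # rows: truth, columns: prediction
report = classification_report(y_true, y_pred)   # precision/recall/F1 per class
```

ROC AUC (roc_auc_score) needs the predicted probabilities rather than the hard labels.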

 

·         Regression Metrics

o   Mean Absolute Error

o   Mean Squared Error

o   R-squared (R²)
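The three regression metrics above, sketched on a small hand-made example:

```python
# MAE, MSE, and R-squared on four predictions.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mae = mean_absolute_error(y_true, y_pred)  # average absolute deviation
mse = mean_squared_error(y_true, y_pred)   # squares penalize large errors more
r2  = r2_score(y_true, y_pred)             # 1.0 is a perfect fit
```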

Part 6: Algorithm Selection for Classification

·         Linear Machine Learning Algorithms

o   Logistic Regression

o   Linear Discriminant Analysis

·         Nonlinear Machine Learning Algorithms

o   K-Nearest Neighbors

o   Naive Bayes

o   Classification and Regression Trees

o   Support Vector Machines
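To illustrate the linear/nonlinear split above, the sketch below fits one algorithm from each family on the same data; the iris dataset and the particular pair of models are illustrative choices:

```python
# One linear (logistic regression) and one nonlinear (RBF SVM) classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

linear_acc    = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
nonlinear_acc = SVC(kernel='rbf').fit(X_tr, y_tr).score(X_te, y_te)
```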

·         Exercise

 

Part 7: Algorithm Selection for Regression

·         Linear Machine Learning Algorithms

o   Linear Regression

o   Ridge Regression

o   LASSO Linear Regression

o   Elastic Net Regression

 

·         Nonlinear Machine Learning Algorithms

o   K-Nearest Neighbors

o   Classification and Regression Trees

o   Support Vector Machines
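The four linear regression variants above can be compared on identical folds in a few lines; the diabetes dataset and the alpha values are illustrative:

```python
# Cross-validate OLS, Ridge, LASSO, and Elastic Net on the same data.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet

X, y = load_diabetes(return_X_y=True)
models = {
    'Linear':     LinearRegression(),
    'Ridge':      Ridge(alpha=1.0),
    'LASSO':      Lasso(alpha=0.1),
    'ElasticNet': ElasticNet(alpha=0.1, l1_ratio=0.5),
}
# Negative MSE is scikit-learn's convention: closer to zero is better
scores = {name: cross_val_score(m, X, y, cv=5, scoring='neg_mean_squared_error').mean()
          for name, m in models.items()}
```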

·         Exercise

 

Part 8: Champion Algorithm Determination

·         How to formulate an experiment to directly compare machine learning algorithms

·         A reusable template for evaluating the performance of multiple algorithms

·         How to report and visualize the results when comparing algorithm performance
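The comparison experiment described above boils down to evaluating every candidate on the same cross-validation folds; this sketch (with an illustrative dataset and model shortlist) shows the shape of such a template:

```python
# Compare several algorithms on identical folds and pick the champion.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
kfold = KFold(n_splits=10, shuffle=True, random_state=7)  # same folds for every model

models = [('LR',   LogisticRegression(max_iter=1000)),
          ('KNN',  KNeighborsClassifier()),
          ('CART', DecisionTreeClassifier(random_state=7)),
          ('SVM',  SVC())]

results = {name: cross_val_score(model, X, y, cv=kfold).mean()
           for name, model in models}
champion = max(results, key=results.get)  # the best mean score wins
```

For reporting, the per-fold scores are commonly visualized as side-by-side box plots.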

·         Exercise

Part 9: Automating Machine Learning Workflows with Pipelines

·         How to use pipelines to minimize data leakage

·         How to construct a data preparation and modeling pipeline

·         How to construct a feature extraction and modeling pipeline
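A minimal pipeline sketch, showing why pipelines prevent leakage: the scaler is re-fitted on the training folds only inside each cross-validation split. The dataset and model are illustrative:

```python
# Chain standardization and a classifier so scaling happens inside each fold.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
pipeline = Pipeline([
    ('scale', StandardScaler()),  # fitted on the training folds only
    ('model', SVC()),
])
mean_score = cross_val_score(pipeline, X, y, cv=5).mean()
```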

·         Exercise

Part 10: Ensemble Methods

·         Bagging: building multiple models from different subsamples of the training dataset

o   Bagged Decision Trees

o   Random Forest

o   Extra Trees

 

·         Boosting

o   AdaBoost

o   Stochastic Gradient Boosting

 

·         Voting: building multiple models (typically of differing types) and combining their predictions with simple statistics (such as the mean)
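One representative from each ensemble family above can be sketched as follows; the dataset, estimator counts, and voting members are illustrative choices:

```python
# Bagging (random forest), boosting (AdaBoost), and voting, cross-validated.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

bagging  = RandomForestClassifier(n_estimators=50, random_state=7)  # bagged trees + random splits
boosting = AdaBoostClassifier(n_estimators=50, random_state=7)      # sequential weak learners
voting   = VotingClassifier(estimators=[                            # majority vote of mixed models
    ('lr',   LogisticRegression(max_iter=1000)),
    ('cart', DecisionTreeClassifier(random_state=7)),
])

scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in [('bagging', bagging), ('boosting', boosting), ('voting', voting)]}
```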

·         Exercise

Part 11: Algorithm Parameter Tuning

·         The importance of algorithm parameter tuning to improve algorithm performance

·         How to use a grid search algorithm tuning strategy

·         How to use a random search algorithm tuning strategy
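The two tuning strategies above can be sketched side by side; the SVM, the C grid, and the sampled distribution are illustrative:

```python
# Grid search (exhaustive) vs. random search (sampled) over the same parameter.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Grid search: try every combination in an explicit grid
grid = GridSearchCV(SVC(), param_grid={'C': [0.1, 1, 10]}, cv=5)
grid.fit(X, y)

# Random search: sample a fixed number of candidates from a range
rand = RandomizedSearchCV(SVC(), param_distributions={'C': np.logspace(-2, 2, 50)},
                          n_iter=10, cv=5, random_state=7)
rand.fit(X, y)

best_c_grid = grid.best_params_['C']
```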

·         Exercise

Part 12: Save and Load Machine Learning Models

·         Finalize Your Model with pickle

·         Finalize Your Model with joblib
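The pickle path can be sketched in a few lines; joblib.dump and joblib.load follow the same pattern and are preferred for models holding large NumPy arrays. The file name here is an arbitrary choice:

```python
# Persist a fitted model to disk and restore it later.
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)          # serialize the fitted model

with open('model.pkl', 'rb') as f:
    loaded = pickle.load(f)        # restore it later, e.g. in production

same_predictions = (loaded.predict(X) == model.predict(X)).all()
```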

Part 13: Summary

Part 14: Projects

·         Predictive Modeling Project Template

o   Use A Structured Step-By-Step Process

o   Machine Learning Project Template in Python

o   Machine Learning Project Template Steps

o   Tips For Using The Template Well

 

·         Project 1: The Hello World (multiclass classification model)

·         Project 2: Classify Sonar Targets (binary classification model)

·         Project 3: Product Sales Prediction (regression model)

 

·         Project 4: Loan Default Prediction (binary classification model)
