# Machine Learning

Machine Learning is the science of using algorithms to make predictions from previous observations, enabling a business to make data-driven decisions. If you are curious about artificial intelligence, smart data analysis, and how machines can be used to predict outcomes, enroll in the Innostem Edunce Edlabs Machine Learning course in Bangalore.

### MACHINE LEARNING COURSE CONTENT

This course teaches you the methodologies and algorithms of Machine Learning using Python as the base programming language. You will start with the basics of Python and its data-science libraries (NumPy, Matplotlib, and Pandas), then move on to advanced Machine Learning concepts such as supervised and unsupervised learning, neural networks, regression, and more. Your expertise in Machine Learning will boost your career in the IT field.

##### Basics of Python
• Keywords and Identifiers
• Variables and Data Types in Python
• Standard Input and Output
• Operators
• Control Flow: If-Else
• Control Flow: While Loop
• Control Flow: For Loop
• Control Flow: Break and Continue
• Tuples Part 1
• Tuples Part 2, Sets
• Dictionary, Strings
• Types of Functions and Function Arguments
• Recursive Functions, Lambda Functions, Modules
• Packages, File Handling
• Exception Handling, Debugging Python
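Several of the basics listed above (control flow, lambda functions, and exception handling) fit in one short, self-contained snippet:

```python
def safe_divide(a, b):
    """Return a / b, using exception handling to guard against division by zero."""
    try:
        return a / b
    except ZeroDivisionError:
        return None

# While loop: sum the integers 1..5
total, i = 0, 1
while i <= 5:
    total += i
    i += 1

# Lambda function: square each element of a list
squares = list(map(lambda x: x * x, [1, 2, 3]))

print(total)               # 15
print(squares)             # [1, 4, 9]
print(safe_divide(10, 0))  # None
```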
##### NumPy
• NumPy Introduction
• Numerical Operations with NumPy
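A minimal sketch of the numerical operations covered here, assuming NumPy is installed:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

doubled = a * 2           # element-wise scalar multiplication
row_sums = a.sum(axis=1)  # sum along each row
product = a @ a           # matrix multiplication
```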
##### Exploratory Data Analysis
• Need and use of EDA
• Exploring the Iris Dataset
• 2D Scatter Plot
• 3D Scatter Plot
• Pair Plots, Histograms
• PDF, Univariate Analysis using PDF
• CDF: Cumulative Distribution Function
• Mean, Median, Variance, and Standard Deviation
• Percentiles and Quantiles, IQR (Inter-Quartile Range)
• Box Plot with Whiskers, Violin Plots
• Univariate, Bivariate and Multi-Variate Analysis
• Multivariate Probability Density, Contour Plot
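The univariate statistics above (mean, median, standard deviation, percentiles, IQR) can be computed directly with NumPy; a small sketch, assuming NumPy is installed:

```python
import numpy as np

data = np.array([2, 4, 4, 4, 5, 5, 7, 9])

mean = data.mean()
median = np.median(data)
std = data.std()                        # population standard deviation
q1, q3 = np.percentile(data, [25, 75])  # quartiles
iqr = q3 - q1                           # inter-quartile range
```

For this sample the mean is 5.0 and the population standard deviation is 2.0; a box plot's whiskers are conventionally drawn at 1.5 × IQR beyond the quartiles.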
##### Dimensionality Reduction and Visualization
• Introduction to Dimensionality reduction
• Representing Datasets using Row and Column Vectors
• Representation of Datasets as a Matrix
• Data Pre-processing: Feature Normalization
• Mean of Data Matrix
• Column Standardization
• Covariance of Data Matrix
• PCA, PCA with a Code Example
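Column standardization and the covariance matrix go hand in hand; a minimal NumPy sketch, with rows as data points and columns as features as described above:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Column standardization: zero mean and unit variance per feature
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_std = (X - mu) / sigma

# Covariance matrix of the standardized data (features x features)
cov = np.cov(X_std, rowvar=False)
```

Note that `np.cov` uses the unbiased (n - 1) estimator by default, so with n = 3 points the diagonal comes out as n / (n - 1) = 1.5 rather than exactly 1.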
##### Principal Component Analysis
• Introduction and use of PCA, Geometric Intuition of PCA
• The mathematical objective function of PCA
• Distance Minimization
• Eigenvalues and Eigenvectors (PCA): Dimensionality Reduction
• PCA for Dimensionality Reduction and Visualization
• Limitations of PCA, PCA with a Code Example
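The eigen-decomposition route to PCA sketched above can be written from scratch with NumPy (a toy example on random data, not the course's own code sample):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X = X - X.mean(axis=0)                  # column-center the data

cov = np.cov(X, rowvar=False)           # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices

order = np.argsort(eigvals)[::-1]       # sort by decreasing eigenvalue
components = eigvecs[:, order[:2]]      # top 2 principal directions

X_reduced = X @ components              # project to 2-D for visualization
```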
##### Supervised Learning
##### Linear Regression
• Geometric Intuition of Logistic Regression
• Squashing using Sigmoid Function
• Objective Function mathematical formulation
• Weight Vector
• L2 Regularization: Overfitting and Underfitting
• L1 Regularization and sparsity
• Probabilistic Interpretation: Gaussian Naive Bayes
• Loss Minimization Interpretation
• Hyperparameter Search: Grid Search and Random Search
• Column Standardization
• Feature Importance and Model Interpretability
• Collinearity of Features
• Test/Run-Time Space and Time Complexity
• Real World Cases
• Non-Linearly separable data and Feature Engineering
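Sigmoid squashing and L2-regularized gradient descent, the heart of the module above, fit in a minimal NumPy sketch (`fit_logreg` and `lambda_` are illustrative names, not from the course material):

```python
import numpy as np

def sigmoid(z):
    """Squash any real value into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, lambda_=0.01, steps=500):
    """Gradient descent on the log-loss with an L2 penalty on the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + lambda_ * w  # log-loss + L2 gradient
        w -= lr * grad
    return w

# Toy data: label 1 exactly when the single feature is positive
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logreg(X, y)
```

The L2 term keeps the learned weight finite even though this toy data is perfectly separable, which is the overfitting control discussed above; L1 would instead push unimportant weights to exactly zero (sparsity).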
##### Logistic Regression
• GridSearchCV, RandomSearchCV
• Extensions to Logistic Regression: Generalised Linear Models
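GridSearchCV boils down to evaluating every combination in a parameter grid and keeping the best; a pure-Python sketch of that idea (the `evaluate` function and its peak at C = 1.0 are made up for illustration):

```python
from itertools import product

def grid_search(evaluate, grid):
    """Try every hyperparameter combination and keep the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its maximum at C=1.0 and an 'l2' penalty
def evaluate(params):
    return -abs(params["C"] - 1.0) + (0.1 if params["penalty"] == "l2" else 0.0)

grid = {"C": [0.1, 1.0, 10.0], "penalty": ["l1", "l2"]}
best, score = grid_search(evaluate, grid)
```

scikit-learn's GridSearchCV additionally cross-validates each combination, and RandomizedSearchCV samples combinations instead of enumerating them all.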
##### Neural Networks
• Working of Biological Neurons
• Growth of Biological Neural Networks
• Diagrammatic representation: Logistic Regression and Perceptron, Multi-Layered Perceptron(MLP)
• Notation, Training a Single-Neuron Model
• Training an MLP: Chain Rule
• Training an MLP: Memoization
• Backpropagation, Activation Functions
• Bias-Variance tradeoff, Decision Surfaces, Playground
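Training a single sigmoid neuron with the chain rule, as covered above, takes only a few lines of NumPy (a 1-D toy problem, not course code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: output 1 exactly when the input is positive
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    p = sigmoid(w * x + b)    # forward pass
    # Chain rule for sigmoid + log-loss: dL/dw = (p - y) * x, dL/db = (p - y)
    dw = np.mean((p - y) * x)
    db = np.mean(p - y)
    w -= lr * dw              # gradient descent step
    b -= lr * db
```

An MLP repeats exactly this forward/backward pattern layer by layer; memoizing the intermediate activations is what makes backpropagation efficient.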
##### Decision Trees
• Axis Parallel Hyperplanes, Sample Decision Tree
• Building a Decision Tree: Entropy
• Building a Decision Tree: Information Gain
• Building a Decision Tree: Gini Impurity
• Building a Decision Tree: Constructing a DT
• Building a Decision Tree: Splitting Numerical Features, Feature Standardization
• Building a Decision Tree: Categorical Features with many possible values
• Overfitting and Underfitting
• Train and Run Time Complexity
• Regression using Decision Trees
• Cases, Code Samples
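Entropy and information gain, the split criteria above, are short formulas; a sketch with NumPy:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of an array of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted = len(left) / n * entropy(left) + len(right) / n * entropy(right)
    return entropy(parent) - weighted

parent = np.array([0, 0, 1, 1])
left, right = np.array([0, 0]), np.array([1, 1])  # a perfect split
```

A 50/50 parent has entropy 1 bit, and a split into pure children recovers all of it, so the gain here is 1.0; Gini impurity plays the same role with 1 - Σ p² in place of the entropy.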
##### Naive Bayes
• Conditional Probability
• Independent Vs Mutually Exclusive Events
• Bayes Theorem with Examples
• Exercise Problems on Bayes Theorem
• Naive Bayes Algorithm
• Toy Example: Train and Test stages
• Naive Bayes on Test Data
• Log Probabilities for Numerical Stability
• Feature Importance and Interpretability
• Imbalanced Data, Outliers, Missing Values
• Handling Numerical Features (Gaussian NB)
• Multiclass Classification, Similarity or Distance Matrix
• Large Dimensionality, Best and Worst Cases
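Bayes' theorem and the log-probability trick above can both be shown in a few lines (the disease-test numbers are assumed, for illustration only):

```python
import math

# Bayes' theorem: P(disease | positive test)
p_disease = 0.01             # prior (prevalence)
p_pos_given_disease = 0.99   # sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos  # about 0.167

# Log probabilities for numerical stability: products of many tiny
# likelihoods underflow to 0.0, while sums of logs stay finite.
likelihoods = [1e-200, 1e-200]
naive_product = likelihoods[0] * likelihoods[1]  # underflows to 0.0
log_sum = sum(math.log(p) for p in likelihoods)  # about -921, still finite
```

Despite the 99% sensitivity, the low prevalence drags the posterior down to roughly 1 in 6, which is exactly the kind of reasoning the exercise problems above drill.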
##### Support Vector Machines
• Geometric Intuition
• Why we take values of +1 and -1 for Support Vector Planes
• Mathematical derivation
• Loss Function (Hinge Loss) based Interpretation
• Dual Form of SVM Formulation
• Kernel Trick, Polynomial Kernel, RBF-Kernel
• Domain-specific Kernels
• Train and Run Time Complexities
• nu-SVM: Control Errors and Support Vectors
• SVM Regression Cases
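The hinge-loss interpretation above is easy to compute directly; a NumPy sketch with labels in {-1, +1}:

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Average hinge loss max(0, 1 - y * (w.x + b)), labels y in {-1, +1}."""
    margins = y * (X @ w + b)
    return float(np.mean(np.maximum(0.0, 1.0 - margins)))

# Two points sitting exactly on the support vector planes (margin = 1)
X = np.array([[1.0], [-1.0]])
y = np.array([1.0, -1.0])
w, b = np.array([1.0]), 0.0
```

Points on or beyond their support vector plane contribute zero loss, which is why only the support vectors shape the solution, and why the planes are pinned at +1 and -1 as noted above.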
##### Unsupervised Learning: Clustering
• What is Clustering?
• Applications
• Metrics for Clustering
• K-Means: Geometric Intuition
• Centroids
• K-Means: Mathematical Formulation: Objective Function
• K-Means: Algorithm
• K-Means++: How to Initialize
• Failure Cases/ Limitations
• K-Medoids
• Determining the Right K
• Code Samples
• Time and Space Complexity
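The two alternating K-Means steps (assignment and centroid update) can be sketched on a 1-D toy set with NumPy:

```python
import numpy as np

def kmeans(X, centroids, max_iter=100):
    """Lloyd's algorithm on 1-D data: alternate assignment and update steps."""
    labels = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        # Assignment step: each point goes to its nearest centroid
        d = np.abs(X[:, None] - centroids[None, :])
        labels = d.argmin(axis=1)
        # Update step: each centroid moves to the mean of its points
        new = np.array([X[labels == k].mean() for k in range(len(centroids))])
        if np.allclose(new, centroids):  # converged
            break
        centroids = new
    return centroids, labels

X = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
centroids, labels = kmeans(X, centroids=np.array([0.0, 5.0]))
```

With these two obvious groups the run converges to centroids 2.0 and 11.0; K-Means++ replaces the arbitrary starting centroids with a spread-out random initialization to sidestep the failure cases listed above.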
##### Hierarchical Clustering
• Agglomerative and Divisive
• Dendrograms
• Agglomerative Clustering
• Proximity Methods: Advantages and Limitations
• Time and Space Complexity
• Limitations of Hierarchical Clustering
• Code Sample
##### DBSCAN Technique
• Density-based Clustering
• MinPts and Eps: Density
• Core
• Border and Noise Points
• Density Edge and Density Connected Points
• DBSCAN Algorithm
• Hyper Parameters: MinPts and Eps
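Classifying points as core, border, or noise from MinPts and Eps, the first stage of DBSCAN described above, in a 1-D NumPy sketch:

```python
import numpy as np

def classify_points(X, eps, min_pts):
    """Label each 1-D point as core, border, or noise per the DBSCAN definitions."""
    d = np.abs(X[:, None] - X[None, :])  # pairwise distances
    neighbors = (d <= eps).sum(axis=1)   # neighborhood counts (self included)
    core = neighbors >= min_pts
    near_core = (d[:, core] <= eps).any(axis=1)  # within eps of some core point
    border = ~core & near_core
    noise = ~core & ~border
    return core, border, noise

X = np.array([0.0, 0.5, 1.0, 1.5, 10.0])
core, border, noise = classify_points(X, eps=0.6, min_pts=3)
```

Here 0.5 and 1.0 are core points, 0.0 and 1.5 are border points reachable from them, and the isolated 10.0 is noise; the full algorithm then links density-connected core points into clusters.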
