
Posts

Showing posts with the label machine learning

Hough Lines

Consider an image like this. We need to find the lines in this image: the basic lines would be horizontal, vertical, and inclined. We will try to detect them with the Hough transform using equations. Before that, we need to understand image coordinates: the top left is (0,0); assume it is like the third quadrant. For better understanding I have drawn a diagram to show how a pixel is calculated from the top left, the pixel arrangement for an image. This is a zoom level of more than 1000x; in the real world you cannot see a single pixel (1 pixel of size 1x1). Alright, now we need to see how to find the lines in the first image; the triangle, zoomed in to pixel level, will look like this. We will start with the horizontal line calculation. Since the pixel calculation is in that quadrant, I will shift and plot the values in the first quadrant for my calculation; when shifted it will look like this. For the horizontal line, r must meet the line at 90 degrees, which puts θ at 90 degrees; you can see the value in the diagram: r = x cos θ + y sin θ
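A minimal sketch of that detection using OpenCV's built-in Hough transform ('triangle.png' is a placeholder filename, and the Canny and accumulator thresholds are arbitrary choices):

import cv2 as cv
import numpy as np

# detect straight lines with the standard Hough transform
image = cv.imread('triangle.png')
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
edges = cv.Canny(gray, 50, 150)            # Hough works on an edge map

# rho resolution 1 pixel, theta resolution 1 degree, accumulator threshold 100
lines = cv.HoughLines(edges, 1, np.pi / 180, 100)

if lines is not None:
    for rho, theta in lines[:, 0]:
        # each detected line satisfies rho = x*cos(theta) + y*sin(theta)
        print(f'r = {rho:.1f}, theta = {np.degrees(theta):.1f} deg')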

Hough Transform - Introduction

So what is the Hough line transform? The Hough line transform is a transform used to detect straight lines. Consider an image where you need to find straight lines; they could be horizontal, vertical, or diagonal lines in the image, and you can find them via the Hough transform. Before that, there are two coordinate systems: the Cartesian coordinate system, with parameters (m, b), and the polar coordinate system, with parameters (r, θ). If the line equation is y = m*x + b in Cartesian (x, y), then in polar it is x = r cos θ, y = r sin θ. We will see this in detail. From the diagram you can see the inclined line and θ are related as x = r cos θ (the adjacent side of θ) and y = r sin θ (the opposite side of θ). 'r' is the distance from the origin, the reference point, to the line, meeting the line at 90 degrees. OK, so by Pythagoras, r = sqrt(x^2 + y^2); substituting for x and y, r = sqrt(r^2 cos^2 θ + r^2 sin^2 θ), and since cos^2 θ + sin^2 θ = 1, r = r. …
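A quick numerical check of that identity (a minimal sketch; the r and θ values are arbitrary):

import numpy as np

r, theta = 5.0, np.radians(30)              # arbitrary example values
x, y = r * np.cos(theta), r * np.sin(theta)
# substituting back: sqrt(r^2 cos^2 θ + r^2 sin^2 θ) = r, since cos^2 + sin^2 = 1
print(np.sqrt(x**2 + y**2))                 # prints 5.0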

Learning rate

So what is the learning rate and why was it introduced? Sometimes it is called the step size, because it scales the step taken to move from point a to point b. Alright, you might be confused, so let's go for an example. y = m*x is the line equation; x1 = x0 + 𝛿x, m1 = m0 + 𝛿m. OK, so what are these? We will see one at a time. If y = m*x and y = 3 and x = 2, the value of m would be m = y/x = 3/2 = 1.5, which is the slope of the equation. There might be a real-life example: think of x as input and y as output; x can be the year gold was purchased, y can be the gold rate, and the gold rate increases as the year increases. The examples are just numbers, don't think too much for now. In this case I know x and y; I am going to keep 'm' as the parameter to be found by iteration or by techniques in machine learning. OK, let's go. We will take this example: m1 = m0 + 𝛿m, y = m*x, x = 2, y = 3. The idea here is to find the optimized value, or the value closer to m = y…
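A minimal sketch of that iteration in Python (the learning rate value 0.1 and the starting guess m = 0 are arbitrary assumptions):

# find m for y = m*x given x = 2, y = 3, by repeated small steps
x, y = 2.0, 3.0
m = 0.0                           # initial guess m0
lr = 0.1                          # learning rate: scales each step 𝛿m

for step in range(50):
    y_pred = m * x
    grad = 2 * (y_pred - y) * x   # derivative of the squared error (y_pred - y)^2
    m = m - lr * grad             # m1 = m0 + 𝛿m, with 𝛿m = -lr * grad

print(m)                          # converges toward 3/2 = 1.5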

Linear regression with n variables - Introduction

Consider a function with more than one variable. y = m*x has one variable, m. y = w1*x1: w1 is the variable. y = w1*x1 + w2*x2: two variables, w1 and w2. y = w1*x1 + w2*x2 + ... + wn*xn: n variables. These w1 ... wn are also called weights. Since from the data we have x1, x2, ..., xn and the corresponding y's, the idea is to optimize the equation, that means to find the optimized values for w1 ... wn. So why is that? We cannot change the values of the data we get as inputs; the only thing we can do is vary the data x using a multiplier w, so the input changes. Assume the equation to be y = x1 + x2. Let x1 be data from a real-life scenario, housing data like the number of bedrooms; let x2 be the number of floors; y is the price of the house with that number of bedrooms and floors. y = 3 + 2 = 5, but the real price might be $7, so the difference of the values is 7 - 5 = 2, and the loss is not zero. The basic technique here would be y = 3 + 2 * 2; what I have done here is mult…
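A minimal sketch of the n-variable form with numpy (the weights and data values are made-up numbers for illustration):

import numpy as np

x = np.array([3.0, 2.0])         # x1 = bedrooms, x2 = floors
w = np.array([1.0, 2.0])         # weights w1, w2: the values we want to optimize

y_pred = np.dot(w, x)            # y = w1*x1 + w2*x2
y_true = 7.0                     # the real price
print(y_pred, y_true - y_pred)   # 7.0, and the loss is 0.0 with w2 = 2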

Newton–Raphson - Failure of the method to converge - example

The method does not converge; it means the approximation cannot be found for certain equations, for example y = 2*x + 2*z + 1 - b, where b is the value we need to reach: 2*x + 2*z + 1 = b. If we try the iteration method to find the values of x and z where the loss is zero, the cycle tends to repeat the same values alternately. Finding dy/dx = 2, dy/dz = 2:

x1 = x0 - y(x0, z0)/f'(x)
z1 = z0 - y(x0, z0)/f'(z)

Assume values x0 = 0.5, z0 = 0.4, b = 24:
y0 = 2*(0.5) + 2*(0.4) + 1 - 24 = -21.2
x1 = 0.5 - (-21.2/2) = 11.1
z1 = 0.4 - (-21.2/2) = 11.0

To check for convergence I will substitute the values of x1 and z1 in the equation, so that if it converges, y = 0, because I am expecting a value of 24 from the equation. Let's see:
y1 = 2*(11.1) + 2*(11.0) + 1 - 24 = 21.2
The loss here is too large, not even close to zero. OK, we will go for the next iteration and see:
x2 = x1 - y1/f'(x) = 11.1 - (21.2/2) = 0.5
z2 = z1 - y1/f'(z) = 11.0 - (21.2/2) = 0.4
We are back at the starting values x = 0.5, z = 0.4, so the iteration just oscillates between the two pairs and never converges.
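A minimal sketch that reproduces the oscillation (same starting values as above):

# Newton-style update applied to y = 2*x + 2*z + 1 - b with b = 24
x, z, b = 0.5, 0.4, 24.0
for step in range(6):
    y = 2*x + 2*z + 1 - b
    # dy/dx = dy/dz = 2, so both variables get the same correction
    x = x - y / 2
    z = z - y / 2
    print(step, round(x, 2), round(z, 2), round(y, 2))
# the (x, z) pair flips between (11.1, 11.0) and (0.5, 0.4) forever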

Newton–Raphson method - Introduction

This page only deals with the theory of the algorithm; for a worked example of this theory check out newtons-raphson-method-for-arriving-at.html. Alright, the theory says: for a function y = x - a, optimize the parameter 'x' so that y = 0 or close to zero; this is a technique used for the optimization of parameters in machine learning problems. Consider a = 5: at x = 5, y will be zero, but the case is not always this straightforward. The value for x is obtained using the Newton–Raphson method, which via iteration says:

x_n+1 = x_n - f(x_n)/f'(x_n)

f(x) = y = x - a
f'(x) = the derivative with respect to x = dy/dx = 1

Consider two iterations here, n = 0, 1. It starts by assuming an initial value for x at n = 0: x0, y0. Plotting the values in a graph, you can see I have plotted x0, y0; the tangent line to the point cuts the x-axis at x1, where y is 0, that is, the loss is zero at iteration x1. You may now wonder why the value of x has moved from x0 to x1: to minimize the value of y, the loss…
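A minimal sketch of those two iterations (the starting guess x0 = 2 is an arbitrary assumption):

# Newton-Raphson on f(x) = x - a, with f'(x) = 1
a = 5.0
x = 2.0                   # initial guess x0
for n in range(2):
    fx = x - a            # current loss y
    x = x - fx / 1.0      # x_n+1 = x_n - f(x_n)/f'(x_n)
    print(n, x)           # jumps straight to 5.0, where y = 0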

Newton–Raphson method - for arriving at a minimum using approximation

Alright, the Newton–Raphson method is used here to find the approximate value of some variable. Looks weird, right? The idea is to try out a technique to find the parameter values of a function so that the loss is minimum / close to 0. The method derives the equation as:

x_n = x_n-1 - f(x)/f'(x)

f(x) is some function, and f'(x) is its derivative. Here I have tried some function of my own, or you could see the wiki reference for an example. Below you can see the 1st iteration value, but to minimize the loss we need to go for further iterations. Let's see how the 2nd iteration performs: we have arrived at the value of x at x1, so the loss, which is y, is zero. So what have we achieved with this? You can try out some other equation to get the approximate value. Still thinking? Consider another equation, x^2 = a; this is to find the root of a, meaning x = sqrt(a). So the idea here is to optimize some parameter, now it is 'x', it could be another parameter 'm' or…
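A minimal sketch of the x^2 = a case, finding sqrt(a) (a = 25 and the starting guess are arbitrary):

# Newton-Raphson on f(x) = x^2 - a, with f'(x) = 2x
a = 25.0
x = 10.0                           # initial guess
for n in range(5):
    x = x - (x**2 - a) / (2 * x)   # x_n = x_n-1 - f(x)/f'(x)
    print(n, x)                    # converges toward sqrt(25) = 5.0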

Loss Function - Mean Absolute Error

Mean Absolute Error - it is straightforward.

import numpy as np
import tensorflow as tf

y_true = np.float32([1, 2])
y_pred = np.float32([2, 2])

# loss = mean(abs(y_true - y_pred))
# np.abs returns the absolute values, e.g. np.abs(-1) returns 1
loss_mae = tf.keras.losses.mean_absolute_error(y_true, y_pred)
print(loss_mae.numpy())
print(np.mean(np.abs(y_true - y_pred)))

Print and verify:

0.5
0.5

Or you can run the code in Colab using the link https://github.com/naveez-alagarsamy/matplotlib/blob/main/mean_absolute_error.ipynb

Reference - https://www.tensorflow.org/api_docs/python/tf/keras/metrics/mean_absolute_error

Loss function - Mean Square Error

Mean Square Error is nothing but loss = square(y_true - y_pred), i.e. loss = (y_true - y_pred)**2; both are the same.

import numpy as np
import tensorflow as tf

y_true = np.float32([1, 2])
y_pred = np.float32([2, 2])

# using numpy mean to calculate the average of the squared differences
loss = np.mean(np.square(y_true - y_pred))

# on the other hand, you can verify with the tensorflow MeanSquaredError
# function, because in future we will use it for our machine learning samples
mse = tf.keras.losses.MeanSquaredError()
mse_loss_tf = mse(y_true, y_pred).numpy()

# print the values to verify
print(loss)
print(mse_loss_tf)

0.5
0.5

You can use the git link and run it in Colab: https://github.com/naveez-alagarsamy/matplotlib/blob/main/mean_square_error.ipynb

Reference - https://www.tensorflow.org/api_docs/python/tf/keras/metrics/mean_squared_error

Linear Regression with one variable - Introduction

It is nothing but making a somewhat clear relationship among variables, the dependent and the independent. Talking in terms of maths, the equation can be used meaningfully for something, maybe to determine or predict values from data. If y = m*x + b, the values for m and b can be anything, but they have to be appropriate to predict y, so that the loss, which is the difference between the existing value and the prediction, is close to zero (~0). To start with, we can take the one variable to be x; in some scenarios m and b are called the variables. The equation stated above is a line equation, but we can have any equation: y = 2*x, y = x*x, y = 2x + 2x*x. So why the need for all these equations? It is all about playing with data. Nowadays in machine learning problems we create data sets; let's consider x, with y being a function of x, as the data. When we express the data as a function and plot it in a graph we get curves. Take some random data x and plot x and y: x = 1, 2, 3, 4 and y = x. The equation we have formed here is: whatever the value o…
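A minimal sketch of that x = 1, 2, 3, 4 example (matplotlib is assumed to be installed, as in the other posts):

import numpy as np
import matplotlib.pyplot as plt

x = np.array([1, 2, 3, 4])
y = x                          # the function y = x: output equals input

plt.plot(x, y, marker='o')     # a straight line with slope m = 1 and b = 0
plt.xlabel('x')
plt.ylabel('y')
plt.show()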

Partial Derivative - Chain Rule

What is the chain rule? Consider the function y = m*x + b; I am going to work on a function which we can use for our machine learning problems. Consider a loss function h = (y - y_hat)^2, the squared error, where y depends on x, y_hat is the predicted value, and h is called the loss. So why two functions again? OK: h = (m*x + b - y_hat)^2 looks complicated, right? This is where we will use the chain rule. How, then? The partial derivatives with respect to m and b are:

[1] dy/dm = x, dy/db = 1

Do a substitution for h: h = u^2, where u = y - y_hat.

[2] dh/du = 2u = 2*(y - y_hat), and du/dy = 1, so
dh/dy = dh/du * du/dy --------[chain rule]
      = 2*(y - y_hat) * 1
dh/dy = 2*(y - y_hat)

So what are the partial derivatives dh/dm and dh/db, then? Chain rule again, from [1] and [2]:

[3] dh/dm = dh/dy * dy/dm = 2*(y - y_hat) * x
[4] dh/db = dh/dy * dy/db = 2*(y - y_hat) * 1

Values [3] and [4] are used in the value corrections for m and b:
new value of m = m - dh/dm
new value of b = b - dh/db
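A minimal sketch of one correction step using [3] and [4] (the data point and starting values are arbitrary assumptions, and I scale the correction by a small learning rate so the step stays small):

# one gradient step for y = m*x + b using the chain-rule derivatives
x, y_hat = 2.0, 7.0            # one data point: input and target value
m, b = 1.0, 0.0                # initial parameter guesses
lr = 0.05                      # learning rate to keep the correction small

y = m * x + b                  # forward pass
dh_dm = 2 * (y - y_hat) * x    # [3]
dh_db = 2 * (y - y_hat)        # [4]

m = m - lr * dh_dm             # new value of m
b = b - lr * dh_db             # new value of b
print(m, b)                    # moved toward values where (y - y_hat)^2 shrinks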

Partial Derivative - Introduction

So what are partial derivatives? Don't worry, I am not going to give a boring maths class; I will go straight to the point. 1. Partial derivatives are used in machine learning back propagation. So all of a sudden, what is this back propagation? Before that there must be a forward propagation, or forward pass. OK, got it? Don't worry, we will talk through an example. 1. Consider data points x = [1, 2, 3], some values for our understanding. 2. Consider a function y = x, so the values of y will be [1, 2, 3]. 3. Consider a function y = x^2, so the values of y will be [1, 4, 9]. 4. In real life the data points could be different, and the dependent function y could be something like y = m*x + b or y = w1*x1 + w2*x2 + b. As you can see from the data plotted, the line we are trying to fit could be either the green or the blue line. So what are we trying to achieve with this? 1. Plot your data as x. 2. Have a function y dependent on x. 3. The use of the function is to predict values of y for x, since we have introduced variables like m, b, w1, w…
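A minimal sketch of that forward pass (the numbers are just the example values above):

import numpy as np

x = np.array([1.0, 2.0, 3.0])   # data points
y_linear = x                    # function y = x    -> [1, 2, 3]
y_square = x ** 2               # function y = x^2  -> [1, 4, 9]
print(y_linear, y_square)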

Stochastic gradient descent with multiple variables

The idea is to understand how you can create a sample for stochastic gradient descent using Python, numpy, and some basic maths. So what is gradient descent? You might be bored with the term, and it is always boring without visualization, so here I will run through a sample using Python and plot it in matplotlib for a visual. matplotlib is another library you can install on the go, and it is like the simple x,y graph that we used in our school days. Don't worry, this is pretty simple. 1. Install Python - https://www.python.org/downloads/ (if you are using Windows it will be an exe; run that, and once installed, type in the command line). Since the function is y = w1*x1 + w2*x2 + b, we will use the known mean square error loss function: mse = (y - y_hat)**2. The next step would be to calculate the partial derivatives for w1, w2, b with respect to y: dy/dw1 = x1, dy/dw2 = x2, dy/db = 1. With h = (y - y_hat)**2, substitute h = u**2 where u = y - y_hat; then dh/du = 2u = 2(y - y_hat), du/dy = 1, and by the chain rule dh/dy = dh/du * du/dy, so dh/dy = 2…
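A minimal sketch of that SGD loop (the toy data, learning rate, and target weights are made-up assumptions for illustration):

import numpy as np

# toy data built from y = 2*x1 + 3*x2 + 1; SGD should recover these weights
rng = np.random.default_rng(0)
X = rng.random((100, 2))
y_hat = 2 * X[:, 0] + 3 * X[:, 1] + 1

w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1

for step in range(1000):
    i = rng.integers(len(X))       # stochastic: pick one sample at a time
    x1, x2 = X[i]
    y = w1 * x1 + w2 * x2 + b      # forward pass
    dh_dy = 2 * (y - y_hat[i])     # dh/dy = 2*(y - y_hat), as derived above
    w1 -= lr * dh_dy * x1          # dh/dw1 = dh/dy * x1
    w2 -= lr * dh_dy * x2          # dh/dw2 = dh/dy * x2
    b -= lr * dh_dy                # dh/db  = dh/dy * 1

print(w1, w2, b)                   # close to 2, 3, 1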

Tutorial - numpy for changing color of an image at pixel level.

import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv

# this program converts the color of certain sections of an image:
# each pixel is checked for its channel values and modified in the numpy array
image = cv.imread('rg.png')

# note: image.shape is (rows, cols, channels), and cv.imread loads pixels
# in BGR channel order
height, width, channels = image.shape

plt.imshow(image)
plt.show()

for pixeli in range(height):
    for pixelj in range(width):
        pixelvalue = image[pixeli][pixelj]
        # match the green pixels (BGR = 76, 177, 34) and turn them black
        if pixelvalue[0] == 76 and pixelvalue[1] == 177 and pixelvalue[2] == 34:
            r = 0
            g = 0
            b = 0
            image[pixeli][pixelj] = [r, g, b]

plt.imshow(image)
plt.show()

The pixel-level colors are changed as…