
Posts

Hough Transform - Introduction

So what is the Hough Line Transform? It is a transform used to detect straight lines. Consider an image where you need to find the straight lines, which could be horizontal, vertical, or diagonal lines in the image; you can do that via the Hough transform. Before that, recall the two ways of describing a line:

Cartesian coordinate system: parameters (m, b)
Polar coordinate system: parameters (r, θ)

In the Cartesian system the line equation is y = m * x + b in terms of x, y. In the polar system a point is written as x = r cos(θ), y = r sin(θ). We will see this in detail. From the diagram you can see how the inclined line and θ are related:

x = r cos(θ), the side adjacent to θ
y = r sin(θ), the side opposite to θ

'r' is the distance from the origin (the reference point) to the line, measured along the perpendicular that hits the line at 90 degrees.

OK, so by Pythagoras, r = sqrt(x^2 + y^2). Substituting for x and y:

r = sqrt(r^2 cos^2(θ) + r^2 sin^2(θ))

and since cos^2(θ) + sin^2(θ) = 1, this gives r = r, so the polar parameterisation is consistent.
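To make the (r, θ) idea concrete, here is a minimal voting sketch of my own (the numbers and bin sizes are illustrative, not from this post). It uses the standard Hough line relation r = x·cos(θ) + y·sin(θ): every point on the same line votes for the same (r, θ) bin, so the bin with the most votes reveals the line.

```python
import numpy as np

# Toy Hough-style accumulator: three points on the vertical line x = 2,
# which in polar form is theta = 0 degrees, r = 2.
points = [(2, 0), (2, 1), (2, 5)]

thetas = np.deg2rad(np.arange(0, 180))   # candidate angles, 1-degree bins
votes = {}                               # (r, theta_deg) -> vote count
for x, y in points:
    for deg, theta in zip(range(180), thetas):
        # each point votes for every (r, theta) pair it could lie on
        r = round(x * np.cos(theta) + y * np.sin(theta))
        votes[(r, deg)] = votes.get((r, deg), 0) + 1

best = max(votes, key=votes.get)
print(best)  # the most-voted bin; here r = 2, theta near 0 degrees
```

In practice you would use a library routine such as OpenCV's HoughLines on an edge map rather than a dict of votes, but the voting idea is the same.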

Learning rate

So what is the learning rate and why was it introduced? It is sometimes described via the step size: it scales the step taken to move from point a to point b. Alright, you might be confused, so let's go through an example.

y = m * x (line equation)
x1 = x0 + δx
m1 = m0 + δm

OK, so what are these? We will see one at a time. If y = m * x and y = 3 and x = 2, the value of m would be m = y / x = 3 / 2 = 1.5, which is the slope of the equation. This could map to a real-life example: think of x as input and y as output. x can be the year gold was purchased and y the gold rate; the gold rate increases as the year increases. The examples are just numbers, don't think too much for now. In this case I know x and y, and I am going to treat 'm' as the parameter to be found by iteration, as is done in machine learning techniques. OK, let's go.

We will take this example:

m1 = m0 + δm
y = m * x, with x = 2, y = 3

The idea here is to find the optimized value, i.e. the value closer and closer to m = y / x.
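The update m1 = m0 + δm can be sketched as a tiny gradient-descent loop (my own illustration, assuming a squared-error loss, which the post has not introduced yet): the learning rate `lr` scales each δm step toward the true slope m = y / x = 1.5.

```python
# Fit y = m * x for the post's numbers x = 2, y = 3, so the true m is 1.5.
x, y = 2.0, 3.0
m = 0.0                    # initial guess, m0
lr = 0.1                   # learning rate: scales each step delta_m

for _ in range(100):
    error = m * x - y      # prediction minus target
    grad = 2 * error * x   # derivative of the squared error (m*x - y)^2 w.r.t. m
    m = m - lr * grad      # m1 = m0 + delta_m, with delta_m = -lr * grad

print(round(m, 4))  # prints 1.5
```

A larger `lr` takes bigger steps (and can overshoot); a smaller one converges more slowly but safely.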

Linear regression with n variables - Introduction

Consider a function with more than one variable:

y = m * x — one variable, m
y = w1 * x1 — w1 is the variable
y = w1 * x1 + w2 * x2 — 2 variables, w1 and w2
y = w1 * x1 + w2 * x2 + ... + wn * xn — n variables

These w1 ... wn are also called weights. Since from the data we have x1, x2, ..., xn and the corresponding y's, the idea is to optimise the equation, that means to find the optimised values for w1 ... wn. So why is that? We cannot change the data values themselves, as they are the inputs; the only thing we can do is to scale the data x using a multiplier w, so the input's contribution changes. Assume the equation to be:

y = x1 + x2

Let x1 be data from a real-life scenario, say housing data like the number of bedrooms, and x2 be the number of floors; y is the price of the house with that number of bedrooms and floors.

y = 3 + 2 = 5

But the real price might be $7, so the difference of the values is 7 - 5 = 2, and the loss is not zero. The basic technique here would be:

y = 3 + 2 * 2

What I have done here is multiplied x2 by a weight of 2, so that y = 3 + 4 = 7 and the loss becomes zero.
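The hand adjustment above can be done automatically. Here is a short sketch of my own (gradient descent on a squared loss, which is one common choice, not something this post has specified) that starts from w1 = w2 = 1, i.e. y = x1 + x2 = 5, and nudges the weights until the prediction matches the $7 price.

```python
# Post's toy housing example: x1 = bedrooms, x2 = floors, target price = 7.
x1, x2 = 3.0, 2.0
target = 7.0

w1, w2 = 1.0, 1.0          # start from y = x1 + x2, which predicts 5
lr = 0.01                  # learning rate

for _ in range(500):
    pred = w1 * x1 + w2 * x2
    error = pred - target            # 5 - 7 = -2 at the start
    w1 -= lr * 2 * error * x1        # d(loss)/dw1 for squared loss
    w2 -= lr * 2 * error * x2        # d(loss)/dw2

print(round(w1 * x1 + w2 * x2, 4))   # prediction after training: 7.0
```

Note that with a single data point many (w1, w2) pairs give zero loss; with real data you fit all examples at once and the weights become constrained.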

Newton–Raphson - Failure of the method to converge - example

"The method does not converge" means the approximation cannot be found for certain equations, for example:

y = 2 * x + 2 * z + 1 - b

Here b is the value we need to reach, i.e. 2 * x + 2 * z + 1 = b. If we try the iteration method to find the values where the loss is zero, the cycle tends to repeat the same values alternately. Finding the partial derivatives: dy/dx = 2, dy/dz = 2.

x1 = x0 - y(x0, z0) / f'(x)
z1 = z0 - y(x0, z0) / f'(z)

Assume values x0 = 0.5, z0 = 0.4, b = 24:

y0 = 2 * (0.5) + 2 * (0.4) + 1 - 24 = -21.2
x1 = 0.5 - (-21.2 / 2) = 11.1
z1 = 0.4 - (-21.2 / 2) = 11.0

To check for convergence I will substitute the values of x1 and z1 into the equation: if it converges, y should be 0, because I am expecting a value of 24 from the equation. Let's see:

y1 = 2 * (11.1) + 2 * (11.0) + 1 - 24 = 21.2

The loss here is too large, not even close to zero. OK, we will go for the next iteration and see:

x2 = x1 - y1 / f'(x1) = 11.1 - (21.2 / 2) = 0.5
z2 = z1 - y1 / f'(z1) = 11.0 - (21.2 / 2) = 0.4

We are back at the starting values, so the iterates just bounce between the same two points and never converge.
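The oscillation above is easy to reproduce in a few lines. This is my own sketch of the exact update rule from the post; note that updating both x and z with the same residual and the same slope is what makes the iterates bounce back and forth.

```python
# Reproduce the non-converging iteration from the post.
def y(x, z, b=24.0):
    return 2 * x + 2 * z + 1 - b

x, z = 0.5, 0.4                     # x0, z0
history = []
for _ in range(4):
    r = y(x, z)                     # current loss
    x = x - r / 2                   # f'(x) = 2
    z = z - r / 2                   # f'(z) = 2
    history.append((round(x, 1), round(z, 1)))

print(history)
# prints [(11.1, 11.0), (0.5, 0.4), (11.1, 11.0), (0.5, 0.4)]
```

Each step overshoots by exactly the amount the previous step undershot, so the cycle has period 2 and the loss never approaches zero.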

Newton–Raphson method - Introduction

This page only deals with the algorithm theory; for a worked example of this theory check out newtons-raphson-method-for-arriving-at.html.

Alright, the theory says that for a function y = x - a, finding the parameter 'x' where y = 0 (or close to zero) is a technique used for optimization of parameters in machine learning problems. Consider a = 5: if x = 5, y will be zero, but the case is not always this straightforward. The value for x is obtained using the Newton–Raphson method, which works via iteration:

x_n+1 = x_n - f(x_n) / f'(x_n)

f(x) = y = x - a
f'(x) = derivative with respect to x = dy/dx = 1

Consider 2 iterations here, n = 0, 1. It starts by assuming an initial value for x, which at n = 0 gives x0 and y0. Plotting the values on a graph, you can see I have plotted (x0, y0); the tangent line at that point cuts the x-axis at x1, where y is 0, meaning the loss is zero at iteration x1. You may now wonder how the value of x has moved from x0 to x1: each step moves x to where the tangent line predicts the loss
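For this particular f(x) = x - a the method is almost trivially fast, which is worth seeing in code (a sketch of my own): because f is linear with slope 1, the tangent line is the function itself, so a single update lands exactly on the root.

```python
# One Newton-Raphson step for f(x) = x - a, with a = 5 as in the post.
a = 5.0
f = lambda x: x - a        # the loss we want to drive to zero
f_prime = lambda x: 1.0    # dy/dx = 1

x = 12.0                   # x0, an arbitrary initial guess
x = x - f(x) / f_prime(x)  # x1 = x0 - f(x0)/f'(x0)

print(x)  # prints 5.0: the tangent of a linear function hits the root exactly
```

For nonlinear functions the tangent only approximates the curve, so several iterations are needed, as the next post shows with x^2 = a.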

Newton's Raphson method - for arriving at minimum using approximation

Alright, the Newton–Raphson method is used here to find the approximate value of some variable. Looks weird, right? The idea is to try out a technique to find the parameter values of a function so that the loss is at a minimum, close to 0. The method gives the update equation:

x_n = x_n-1 - f(x_n-1) / f'(x_n-1)

where f(x) is some function and f'(x) is its derivative. Here I have tried some function of my own, or you could see the wiki reference for an example. From below you can see the 1st iteration value, but to minimise the loss we need to go for further iterations.

Let's see how the 2nd iteration performs. We have arrived at the value of x at x1, so the loss, which is y, is zero. So what have we achieved with this? You can try out some other equation to get the approximate value. Still thinking? Consider another equation, x^2 = a; this is to find the root of a, meaning x = sqrt(a). So the idea here is to optimize some parameter, now it is 'x', but it could be some other parameter such as 'm'.
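The x^2 = a case can be sketched in a few lines (my own toy numbers, using f(x) = x^2 - a and f'(x) = 2x): iterating the same update approximates sqrt(a) without ever calling a square-root function.

```python
# Newton-Raphson for x^2 = a, i.e. f(x) = x^2 - a, f'(x) = 2x.
a = 9.0
x = 1.0                             # initial guess x0
for _ in range(10):
    x = x - (x * x - a) / (2 * x)   # x_n = x_{n-1} - f/f'

print(x)  # converges to sqrt(9), i.e. ~3.0
```

Unlike the linear case, the tangent only approximates the curve here, so the first step overshoots (x1 = 5) and the later iterations home in on 3; the convergence is quadratic, so a handful of iterations is enough.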

Loss Function - Mean Absolute Error

Mean Absolute Error - it is straightforward:

import numpy as np
import tensorflow as tf

y_true = np.float32([1, 2])
y_pred = np.float32([2, 2])

# loss = mean(abs(y_true - y_pred))
# np.abs returns the absolute values, e.g. np.abs(-1) returns 1
loss_mae = tf.keras.losses.mean_absolute_error(y_true, y_pred)
print(loss_mae.numpy())
print(np.mean(np.abs(y_true - y_pred)))

Print and verify:

0.5
0.5

Or you can run the code in Colab using the link https://github.com/naveez-alagarsamy/matplotlib/blob/main/mean_absolute_error.ipynb

reference - https://www.tensorflow.org/api_docs/python/tf/keras/metrics/mean_absolute_error