
Add Linear Regression #852

Open
Kubha99 opened this issue Mar 6, 2022 · 5 comments

Comments

@Kubha99
Contributor

Kubha99 commented Mar 6, 2022

No description provided.

@Kubha99
Contributor Author

Kubha99 commented Mar 6, 2022

I saw that the Linear Regression algorithm is missing under the ML module. I will try to implement it.

@josezy13

josezy13 commented Sep 1, 2022

close

@josezy13

josezy13 commented Sep 6, 2022

y

@rmsvivek

The LR algorithm is missing.

@jaki729

jaki729 commented Nov 26, 2023

This code is a basic implementation of simple linear regression using gradient descent. Replace X and y with your dataset. On each epoch the code computes predictions, the mean squared error, and the gradients of the error with respect to the slope and intercept, then updates both to reduce the error.

# Simple Linear Regression using Gradient Descent
import numpy as np

# Sample data
X = np.array([1, 2, 3, 4, 5], dtype=float)  # Input feature
y = np.array([3, 4, 2, 5, 6], dtype=float)  # Target variable

# Hyperparameters
learning_rate = 0.01
epochs = 1000

# Initialize slope (weight) and intercept (bias)
m = 0.0  # Initial slope
c = 0.0  # Initial intercept

n = float(len(X))  # Number of samples

# Gradient Descent
for i in range(epochs):
    # Predictions with the current parameters
    y_pred = m * X + c

    # Mean squared error (useful for monitoring convergence)
    error = np.mean((y_pred - y) ** 2)

    # Gradients of the MSE with respect to m and c
    gradient_m = (-2 / n) * np.sum(X * (y - y_pred))
    gradient_c = (-2 / n) * np.sum(y - y_pred)

    # Update parameters in the direction that reduces the error
    m -= learning_rate * gradient_m
    c -= learning_rate * gradient_c

# Final slope and intercept
print("Slope:", m)
print("Intercept:", c)
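For a sanity check on the gradient-descent result, simple linear regression also has a closed-form ordinary least squares solution. A minimal sketch for the same toy data (this is an addition for verification, not part of the implementation above):

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([3, 4, 2, 5, 6], dtype=float)

# Closed-form OLS: slope = cov(X, y) / var(X), intercept from the means
m = np.sum((X - X.mean()) * (y - y.mean())) / np.sum((X - X.mean()) ** 2)
c = y.mean() - m * X.mean()

print("Slope:", m)      # 0.7
print("Intercept:", c)  # 1.9
```

With enough epochs, the gradient-descent estimates should converge to these values.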
