This course studies the mathematical foundations of machine learning, focusing on the trade-offs between statistical accuracy, scalability, and computational efficiency of distributed machine learning and optimization algorithms. Topics include empirical risk, convexity in learning, convergence analysis of the gradient descent algorithm, stochastic gradient descent, neural networks, and reinforcement learning.
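As a rough illustration of two of the listed topics (empirical risk and stochastic gradient descent), the sketch below minimizes a mean-squared-error empirical risk for linear regression with SGD. All names and data here are made up for the example; this is not material from the course itself.

```python
import numpy as np

# Hypothetical toy problem: recover a linear model w_true from noisy data
# by minimizing the empirical risk (1/n) * sum_i 1/2 * (x_i^T w - y_i)^2
# with stochastic gradient descent (one sample per update).
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)      # initial iterate
lr = 0.05            # constant step size (assumed for the sketch)
for epoch in range(50):
    for i in rng.permutation(n):
        # gradient of 1/2 * (x_i^T w - y_i)^2 with respect to w
        grad = (X[i] @ w - y[i]) * X[i]
        w -= lr * grad

print(w)  # should land near w_true, up to noise from the constant step size
```

With a constant step size, SGD converges only to a neighborhood of the minimizer; a decaying step size (another standard point in convergence analysis) would shrink that neighborhood.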
Units: 4
Grading: Letter
Pass Time: 1, 2, 3
Level Limit: None
College: Engineering

I really enjoyed this class, and this professor was super chill. Lectures were nicely paced, the midterm (40%) was chill, and the final (45%) was chill besides one tough question. TAs were super helpful at office hours; I would highly recommend going. This class was way better than 130A for me, and I found it super interesting.
Professor was super nice. The final and midterm were collectively 85% of the grade, with not many questions on either. One question (out of 4) on the final was extremely difficult, so your chance of getting an A revolved largely around that one question.
One of my favourite professors. He made difficult material seem much easier, and he's very nice and relaxed, more like a friend than a professor. Homeworks and the midterm were very fair, but one of the questions on the final was pretty tough and almost single-handedly decided who got an A and who didn't. I really liked the class all around.
It was so bad. Terrible.
He's really nice and explains the concepts clearly and slowly. HWs were helpful, and exams were reasonable. He focused on the concepts instead of the computations, which was really nice. Overall a big fan of Pedarsani; definitely take 130B with him over any other prof.