Context
An embedded-systems coursework project where most students simulated circuits and submitted plots. I wanted to combine it with the ML elective I was taking that semester.
Voltage prediction from amplifier circuit data.
Goal
Predict the output voltage of a differential amplifier circuit from its design parameters (bias current, transistor sizing, supply voltages) using a regression model trained on simulator data, instead of solving the analytic equations every time.
Method
Generated a few thousand parameter combinations in Micro-Cap, captured the simulated output voltages, then trained a regression model in Jupyter against that dataset.
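The pipeline above can be sketched in a few lines of NumPy. Everything specific here is an assumption: the parameter ranges, the stand-in function that plays the role of the Micro-Cap simulation, and the plain least-squares fit (the original project may have used a different model entirely).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design parameters: bias current (uA), W/L ratio, supply (V).
# Ranges are made up for illustration, not taken from the real project.
n = 2000
X = np.column_stack([
    rng.uniform(10, 100, n),    # bias current
    rng.uniform(1, 20, n),      # transistor W/L
    rng.uniform(1.8, 5.0, n),   # supply voltage
])

# Stand-in for the simulator output: an arbitrary smooth function of the
# parameters plus noise, NOT the real amplifier equations.
y = 0.4 * X[:, 2] + 0.01 * X[:, 0] * np.log(X[:, 1]) + rng.normal(0, 0.02, n)

# Hold out 20% of the rows, then fit ordinary least squares on a design
# matrix with an interaction feature and a bias column.
split = int(0.8 * n)
A = np.column_stack([X, X[:, 0] * np.log(X[:, 1]), np.ones(n)])
w, *_ = np.linalg.lstsq(A[:split], y[:split], rcond=None)

# Evaluate with mean absolute percentage error on the held-out rows.
pred = A[split:] @ w
mape = np.mean(np.abs((pred - y[split:]) / y[split:]))
print(f"held-out MAPE: {mape:.1%}")
```

Because the stand-in target is linear in the chosen features, the sketch fits it almost exactly; the real simulator data would be harder, which is where a nonlinear regressor earns its keep.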
Result
Mean absolute error of ~3% on the held-out set. Submitted as an unconventional take on the brief and got marked accordingly.
Limitations
The dataset was too clean; I generated it, after all. A real-world version would need to model component tolerances and temperature drift, both of which the simulator-only dataset ignored.
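One cheap way to make the training data less clean is a Monte Carlo spread around each nominal design point. The sketch below jitters hypothetical parameters by made-up fractional tolerances; the nominal values and tolerance figures are illustrative, not from the project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nominal design point: bias current (uA), W/L ratio, supply (V) -- made-up values.
nominal = np.array([50.0, 10.0, 3.3])

# Hypothetical 1-sigma fractional tolerances per parameter
# (e.g. 5% spread on bias current from resistor tolerance).
tol = np.array([0.05, 0.02, 0.01])

# Draw Monte Carlo samples around the nominal point; temperature could be
# handled the same way, as an extra input column swept over a range.
samples = nominal * (1 + rng.normal(0, tol, size=(1000, 3)))

print(samples.mean(axis=0))           # near the nominal values
print(samples.std(axis=0) / nominal)  # near the chosen tolerances
```

Feeding these perturbed points back through the simulator would give a dataset whose targets reflect component variation instead of a single idealized circuit.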