Experience
Amazon SDE (June 2020 - January 2023)
I worked at Amazon as an SDE I in the Forecasting Deep Learning team.
My team owned several forecasting models that ingested large amounts of raw data and output forecast bundles for our internal customers to consume. We maintained multiple pipelines for the various models, which kicked off several workflows each day to generate these forecasts. Our customers ranged from the data science side of our team, who analyzed the forecasts to make sure nothing was amiss, to downstream teams that transformed and compiled our forecasts into more consumable formats for different stores to use. Our team worked on a sprint system, and I generally switched between projects as sprints started and ended. I handled a variety of tasks depending on what was needed, but I’ll list some of the more notable and frequent ones.
My most recent project involved designing and implementing a tech stack consolidation between two of our more prominent models. The previous year, during a re-org, our team had taken ownership of several core forecasting models from other teams, and one of the larger models we inherited had a large, complex code base that was difficult to deep dive into or maintain. The goal of my project was to reorganize the model workflow to make it more efficient, and to restructure it so that it mirrored our main pre-existing model. The design work centered on deciding how to allocate the model’s different processes, where to use the computing resources of EMR clusters, and how to implement external APIs to simplify the process.
Some of my work consisted of modifying or creating datasets depending on what features our customers wanted. Our workflows flowed the input data through these datasets to filter and compute the information our models needed to run properly. The datasets had to process large amounts of data quickly and efficiently so that our workflows could generate forecasts in a timely manner. Modifications usually served either to remove unnecessary steps from a workflow or to change how it ran so that it completed more efficiently.
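The filter-and-compute pattern those datasets followed can be sketched in plain Python. Everything here is hypothetical (the field names, the "active product" filter, the demand totals); the real datasets ran at much larger scale on distributed infrastructure:

```python
# Hypothetical example of the filter-and-compute step a dataset performs:
# keep only active products, then aggregate per-product demand totals
# that a downstream forecasting model could consume.

raw_records = [
    {"product_id": "A", "active": True,  "units_sold": 12},
    {"product_id": "A", "active": True,  "units_sold": 8},
    {"product_id": "B", "active": False, "units_sold": 5},
    {"product_id": "C", "active": True,  "units_sold": 3},
]

def build_demand_dataset(records):
    """Filter out inactive products and aggregate units sold per product."""
    totals = {}
    for rec in records:
        if not rec["active"]:
            continue  # dropping unneeded rows early keeps the workflow fast
        totals[rec["product_id"]] = totals.get(rec["product_id"], 0) + rec["units_sold"]
    return totals

print(build_demand_dataset(raw_records))  # {'A': 20, 'C': 3}
```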
Another main task involved maintenance and updates to our pipelines and workflows. As other teams updated inputs or schemas for their own improvements, we needed to backtest and forward test many components of our own workflows to keep up to date and make sure forecast accuracy didn’t degrade in the process. These workflows utilized a variety of AWS services, including (but not limited to) EMR, S3, SageMaker, and Athena, and I familiarized myself with quite a few of them over the course of my time there.
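The backtest criterion can be sketched as a before/after accuracy comparison. This is a hypothetical illustration: the numbers are made up, and MAPE stands in for whatever error metric a real backtest would use:

```python
# Hypothetical backtest check: after an upstream schema change, recompute
# forecast error on historical data and flag any accuracy regression.

def mape(actuals, forecasts):
    """Mean absolute percentage error over paired series (skips zero actuals)."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errors) / len(errors)

actuals       = [100, 120, 80, 95]   # historical demand
old_forecasts = [ 98, 125, 78, 97]   # forecasts before the upstream change
new_forecasts = [ 97, 124, 79, 96]   # forecasts after the upstream change

old_err = mape(actuals, old_forecasts)
new_err = mape(actuals, new_forecasts)
# the backtest passes only if the updated workflow is at least as accurate
print(new_err <= old_err)
```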
Leica Geosystems Internship (June - August 2019)
Leica Geosystems focuses on products and systems for geographical measurement and surveying.
My main goal as a software development intern was to create a C# program that demoed most of the features of Leica’s software. It took in Cyclone, JetStream, and LGS files (all formats from Leica’s 3D point cloud software) and allowed for manipulation of the point cloud. I based the C# features on a similar C++ program that was already implemented, and my program was meant to serve as a reference for other C# developers to see how Leica’s products worked. While working on this project, I spent ample time learning debugging techniques for both C# and C++ in Visual Studio, and learned how to marshal data between native C++ code and managed C# code. I worked with callback functions and many kinds of typecasting, and learned how to navigate much larger projects than I was used to (tens of thousands of lines of code, with many classes and files I had to understand before I could implement the program). The general outline of the program existed already, such as the base classes for 3D lines, quaternions, and cylinders, but most of the tools to utilize them had to be implemented by me. I also gained experience in documentation, writing up the C# program’s API for future developers.
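The callback pattern I worked with in C#/C++ (handing a managed delegate to native code) has a close analogue in Python's ctypes, which is what the rest of this page is written around. This sketch passes a Python comparison function into libc's qsort; it assumes a POSIX system where `CDLL(None)` exposes libc symbols:

```python
import ctypes

# Load the C runtime (on Linux/macOS, CDLL(None) exposes libc symbols).
libc = ctypes.CDLL(None)

# Declare the native callback signature: int (*)(const int *, const int *).
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

libc.qsort.restype = None
libc.qsort.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_size_t, CMPFUNC]

def py_compare(a, b):
    """Managed-side comparator invoked from inside native qsort."""
    return a[0] - b[0]

values = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(py_compare))
print(list(values))  # [1, 2, 3, 4, 5]
```

The same concerns apply in both worlds: the callback's signature must match the native declaration exactly, and the callback object must stay alive for as long as native code might invoke it.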
JD.com Internship (June - September 2018)
JD.com is China’s largest online retailer and its biggest overall retailer. I worked at a US branch of the company in Mountain View, CA, as an intern in the AI NLP branch, working on the backend.
My main goal as an AI NLP intern was to raise their English deep parsing system’s accuracy from roughly 30% to 70% over the course of the summer. In this internship, I got first-hand experience with how the entire parsing system worked, from input and tokenization through parsing and their server-test system to the end product. Most of the backend was coded in Python, but I also needed to learn and use their custom language when modifying or creating the rules that grammatically structure English sentences.
I went through a cycle of rule modification and creation, unit tests, and unit regression tests, as well as updating and maintaining the large English lexicons we kept in the database. I was occasionally assigned new features for the English parsing system, such as support for compound words and a stemming feature to catch words that might be missing from the lexicon in our database.
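The idea behind that stemming fallback can be shown in a few lines. The lexicon, suffix list, and e-deletion rule here are all simplified stand-ins for the real system's much larger resources:

```python
# Hypothetical sketch of a stemming fallback: if a token is missing from
# the lexicon, strip common English suffixes and retry the lookup.

LEXICON = {"run": "VERB", "parse": "VERB", "token": "NOUN"}
SUFFIXES = ["ing", "ed", "es", "s"]

def lookup(token):
    """Return the token's part of speech, falling back to a crude stem."""
    if token in LEXICON:
        return LEXICON[token]
    for suffix in SUFFIXES:
        if token.endswith(suffix):
            stem = token[: -len(suffix)]
            if stem in LEXICON:
                return LEXICON[stem]
            # handle e-deletion, e.g. "parsing" -> "pars" -> "parse"
            if stem + "e" in LEXICON:
                return LEXICON[stem + "e"]
    return None  # genuinely out of vocabulary

print(lookup("parsing"))  # VERB
print(lookup("tokens"))   # NOUN
```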
Overall, this project gave me experience in picking up new concepts quickly, both in learning their custom language and in manipulating large databases in the form of the English lexicons. I also came to understand how the entire English deep parsing system worked from input to output. By the end of the internship, I had raised the accuracy past the 70% expectation to around 85%.
Truckxi Internship (June - August 2017)
I took on quite a few roles throughout my time at Truckxi, switching between video editor, bug tester, and bug fixer, and generally shadowing various programmers.
Truckxi had several projects going on, but the main one I helped with was a Supply Chain Management (SCM) service that helped warehouses keep better track of shipments and the goods they needed or had. The service needed to connect various databases and servers so that it stayed up to date with the warehouse and its employees. Working on the SCM helped me develop skills in working with servers and made me more familiar with how programming work was divided among my co-workers.
On the programming side, I mainly helped make sure Purchase Orders and Sales Orders rendered properly and updated the inventory. This involved making sure that users could filter through the items they were trying to find, and that cancelling or updating orders would correctly update the server and thus the database.
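The invariant I was checking can be sketched without any framework. This is a hypothetical simplification (the real logic lived in Django views and models, and the item and order names here are invented):

```python
# Hypothetical sketch of the invariant under test: cancelling a sales
# order must return its reserved quantity to the warehouse inventory.

inventory = {"widget": 100}
orders = {}

def place_sales_order(order_id, item, qty):
    """Reserve stock for a sales order, rejecting it if inventory is short."""
    if inventory.get(item, 0) < qty:
        raise ValueError("insufficient stock")
    inventory[item] -= qty
    orders[order_id] = {"item": item, "qty": qty, "status": "open"}

def cancel_order(order_id):
    """Cancel an open order and release its stock back to inventory."""
    order = orders[order_id]
    if order["status"] == "open":
        inventory[order["item"]] += order["qty"]
        order["status"] = "cancelled"

place_sales_order("SO-1", "widget", 30)
print(inventory["widget"])  # 70  (stock reserved)
cancel_order("SO-1")
print(inventory["widget"])  # 100 (stock released)
```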
I spent most of my programming time fixing bugs in the back end and front end. On the back end I used the Django framework with Python; on the front end I used a combination of React and Redux with JavaScript.
Make School Summer Program (July - August 2016)
During the eight-week Make School summer program, I, along with around twenty others in my track, worked on creating and publishing a mobile app before the program ended. I created a game called Futility, an arcade survival-style game where the user tries to score as many points as possible. The premise revolves around matching colors between different objects to keep a ball from reaching the floor.
I used both SpriteBuilder and Swift to create this application. SpriteBuilder helped me import graphics and easily lay out the visual placement of all my resources, while Swift handled the logic and behavior of the objects. The main technical challenges centered on making the game’s physics feel fluid and natural.
Throughout the program, I collaborated with my coding neighbors in a variety of ways. For the most part, we bug-tested each other’s games and critiqued each other’s UX designs to make the apps feel simple and easy to use. Early on, we also worked together to accumulate and combine ideas so that each of us could choose a concept we saw potential in.
Project in Databases and Web Applications (School Project)
This project class’ main goal was to create a website that allowed users to search for and purchase movies online. Throughout the quarter I used MySQL, AWS instances, Java, JavaScript, HTML, and CSS to accomplish the tasks set forth.
HTML/CSS and JavaScript made up the front end and communicated with the back end (consisting of Java servlets) through JSON objects. We also experimented with reCAPTCHA and password encryption as security measures. For search functionality we experimented with various query-matching approaches in MySQL, as well as full-text and fuzzy searching. To allow for scalability, we used load balancing so that many concurrent users wouldn’t interfere with each other when accessing the database, and used JMeter to gather statistics on which methods were more efficient. In addition to the website, we used Android Studio to create an Android application that ran a simplified version of the entire project.
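The fuzzy-searching idea can be illustrated with Python's standard difflib; the actual project did this with MySQL query features, and the movie titles below are made up:

```python
import difflib

# Hypothetical catalog; the real project matched against a MySQL movies
# table, but difflib shows the same tolerate-typos idea.
titles = ["The Godfather", "Goodfellas", "The Good Place", "Gone Girl"]

def fuzzy_search(query, candidates, cutoff=0.5):
    """Return candidates ranked by similarity to a possibly misspelled query."""
    return difflib.get_close_matches(query, candidates, n=3, cutoff=cutoff)

# a query with a typo still surfaces the intended title first
print(fuzzy_search("The Godfathr", titles))
```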
Overall, this class was very engaging and gave me useful experience building a project from scratch, taking into account not only security but also the isolation of customer transactions to create a smooth and safe user experience.