Various Projects

2021-10-22 - Networks and Distributed Systems
#Python

This is an assortment of projects I created for my Computer Systems course at Northeastern University. Most of these projects were done in a pair programming team alongside Sebastian and Zhi Cheng.

FTPS Client

2021-10-01

The goal of this project was to develop an FTPS client to securely transfer files to and from an FTPS server. The client provided support for the ls, mkdir, rm, rmdir, cp, and mv commands.
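
As a rough illustration of the operations the client supported, the sketch below uses Python's built-in ftplib.FTP_TLS to run a couple of the equivalent commands over an encrypted connection. The host, credentials, and paths are placeholders, and this is not our actual implementation, which handled its own command parsing.

```python
from ftplib import FTP_TLS

# Placeholder connection details for illustration only.
HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"

def connect():
    """Open an FTPS connection with an encrypted data channel."""
    ftps = FTP_TLS(HOST)
    ftps.login(USER, PASSWORD)
    ftps.prot_p()  # upgrade the data channel to TLS
    return ftps

def ls(path):
    """Roughly what our `ls` command did: list a remote directory."""
    ftps = connect()
    names = ftps.nlst(path)
    ftps.quit()
    return names

def cp_to_server(local_path, remote_path):
    """Roughly the upload half of our `cp` command."""
    ftps = connect()
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_path}", f)
    ftps.quit()
```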

BGP Router

2021-10-22

For this project, we implemented a simple BGP router to route messages to and from a collection of networks. When forwarding a packet from one router to another, we ensured that the best path was selected based on localpref, self origin, AS path length, and then lowest IP, in that order. Additionally, we had to ensure that peer-to-peer, peer-to-provider, and provider-to-peer routes were pruned.
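
A minimal sketch of that tie-breaking order is below. The dictionary fields are assumptions about how routing-table entries might be stored, not our exact format.

```python
# Hypothetical routing-table entries; the field names are assumptions, not
# our exact message format.
routes = [
    {"peer": "172.168.0.2", "localpref": 100, "selfOrigin": True,  "ASPath": [1, 4]},
    {"peer": "172.168.0.3", "localpref": 150, "selfOrigin": False, "ASPath": [3]},
]

def route_key(route):
    """Mirror the tie-breaking order described above: higher localpref first,
    then self-originated routes, then shorter AS paths, then the lowest IP."""
    return (
        -route["localpref"],                              # higher localpref wins
        not route["selfOrigin"],                          # self-origin routes win
        len(route["ASPath"]),                             # shorter AS path wins
        tuple(int(octet) for octet in route["peer"].split(".")),  # lowest IP wins
    )

best = min(routes, key=route_key)
print(best["peer"])   # 172.168.0.3 (highest localpref)
```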

Finally, we also implemented path coalescing. This was the most challenging part, as we had to support both aggregation and disaggregation: we kept a copy of the non-aggregated paths and updated it at the same time, so that if a disaggregation call was made we had a backup copy to rebuild from.
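
Here is a simplified sketch of that idea using the standard library's ipaddress module. It only groups prefixes by next hop, whereas the real router also required the other route attributes to match before coalescing.

```python
import ipaddress

def aggregate(routes):
    """Coalesce numerically adjacent prefixes that share a next hop. `routes`
    is a list of (network, next_hop) pairs; a simplification, since the real
    table also required localpref, origin, etc. to match."""
    by_hop = {}
    for net, hop in routes:
        by_hop.setdefault(hop, []).append(ipaddress.ip_network(net))
    table = []
    for hop, nets in by_hop.items():
        for net in ipaddress.collapse_addresses(nets):
            table.append((str(net), hop))
    return table

def withdraw(raw_routes, route):
    """Disaggregate by dropping the withdrawn route from the non-aggregated
    copy and rebuilding the aggregated table from scratch."""
    remaining = [r for r in raw_routes if r != route]
    return remaining, aggregate(remaining)

# We kept the raw routes around as the backup copy mentioned above.
raw_routes = [("192.168.0.0/24", "10.0.0.1"), ("192.168.1.0/24", "10.0.0.1")]
print(aggregate(raw_routes))                     # [('192.168.0.0/23', '10.0.0.1')]
raw_routes, table = withdraw(raw_routes, ("192.168.1.0/24", "10.0.0.1"))
print(table)                                     # back to [('192.168.0.0/24', ...)]
```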

Reliable Transport Protocol

2021-11-05

We implemented two programs, 3700send and 3700recv, to send and receive packets. Our 3700send program would start off by sending a packet and then check whether the full message had been sent. Once we received an ACK from 3700recv, the corresponding packet would be checked off.

Our 3700recv program would continuously listen for packets while searching for an EOF flag, at which point it would send an ACK back to our sender program.
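A heavily stripped-down sketch of the sender's ACK bookkeeping is shown below. The packet format and field names are assumptions for illustration; the real 3700send and 3700recv also dealt with retransmission timers, duplicate packets, and a send window.

```python
import json
import socket

def send_message(data, dest):
    """Send `data` (a string) to `dest` in numbered chunks over UDP, and mark
    each chunk off once the matching ACK comes back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunks = [data[i:i + 1000] for i in range(0, len(data), 1000)]
    acked = [False] * len(chunks)

    for seq, chunk in enumerate(chunks):
        sock.sendto(json.dumps({"seq": seq, "data": chunk}).encode(), dest)

    # Keep listening until every packet has been checked off by an ACK.
    while not all(acked):
        msg, _ = sock.recvfrom(65535)
        ack = json.loads(msg.decode())
        acked[ack["ack"]] = True

    # Signal the end of the message so the receiver can stop listening.
    sock.sendto(json.dumps({"eof": True}).encode(), dest)
```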

Web Crawler

2021-11-19

The goal of this project was to implement a web crawler to retrieve a set of flags from a fairly large (fake) social media platform. The web crawler had to recursively crawl through pages while keeping track of already-crawled pages. This was done by keeping uncrawled URLs in a queue and keeping already-crawled URLs in a separate array.

Starting with just '/', we would check each page for secret flags before adding every URL on the page to the frontier. Every crawled URL was then added to the crawled array. Additionally, we tracked the HTTP response for each page so that if a 500 response was returned, we could re-crawl the page.
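
The crawl loop amounted to something like the sketch below. fetch(), extract_flags(), and extract_links() are hypothetical helpers passed in to stand for our HTTP and HTML parsing code; the status-code check mirrors the 500 re-crawl behaviour.

```python
from collections import deque

def crawl(fetch, extract_flags, extract_links, start="/"):
    """Sketch of the crawl loop. fetch(url) -> (status, body) and the two
    extract_*() helpers are hypothetical stand-ins for our HTTP/HTML code."""
    frontier = deque([start])   # uncrawled URLs
    crawled = set()             # already-crawled URLs
    flags = []

    while frontier:
        url = frontier.popleft()
        if url in crawled:
            continue
        status, body = fetch(url)
        if status == 500:
            frontier.append(url)            # server error: re-crawl this page later
            continue
        crawled.add(url)
        flags.extend(extract_flags(body))   # look for secret flags on the page
        for link in extract_links(body):
            if link not in crawled:
                frontier.append(link)       # add every new URL to the frontier
    return flags
```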

While we did not implement it for this project, a worthwhile improvement for future web crawlers would be to add parallelism to the algorithm. Our average run time was 3-6 minutes, which was partly luck; crawling several pages concurrently would make the run time faster and less dependent on it.
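
We never built this, but one way to add that parallelism would be to hand batches of frontier URLs to a thread pool, roughly as sketched below. fetch_and_parse() is another hypothetical helper that returns the flags and links found on one page.

```python
from concurrent.futures import ThreadPoolExecutor

def crawl_parallel(fetch_and_parse, frontier, workers=8):
    """fetch_and_parse(url) -> (flags, links) is a hypothetical helper; the
    frontier is a plain list of starting URLs."""
    crawled, flags = set(), []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            batch = [frontier.pop(0) for _ in range(min(workers, len(frontier)))]
            crawled.update(batch)
            # Crawl up to `workers` pages at once instead of one at a time.
            for page_flags, links in pool.map(fetch_and_parse, batch):
                flags.extend(page_flags)
                frontier.extend(l for l in links if l not in crawled)
    return flags
```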

Distributed Key Value Database

2021-12-17

For this project, we implemented a simplified version of the Raft consensus algorithm for managing logs. To keep our database correct, we had to handle packet drops, follower failures, and leader failures. The election process also had to be implemented carefully so that those failures were handled properly.
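
As a rough illustration of the election side of this, the sketch below shows a follower timing out and requesting votes, and a replica deciding whether to grant its vote. The message fields and the send callback are assumptions; the real replica also replicated log entries, checked how up to date a candidate's log was before voting, and answered client requests.

```python
import random
import time

class Replica:
    """Heavily simplified sketch of the Raft election logic described above."""

    def __init__(self, my_id, peers, send):
        self.id = my_id
        self.peers = peers
        self.send = send                  # callable: send(destination, message)
        self.term = 0
        self.voted_for = None
        self.state = "follower"
        self.last_heartbeat = time.monotonic()
        self.timeout = random.uniform(0.15, 0.30)   # randomized election timeout

    def tick(self):
        """Called from the main loop: start an election if the leader went quiet."""
        if self.state != "leader" and time.monotonic() - self.last_heartbeat > self.timeout:
            self.term += 1
            self.state = "candidate"
            self.voted_for = self.id
            for peer in self.peers:
                self.send(peer, {"type": "request_vote", "term": self.term, "src": self.id})

    def on_request_vote(self, msg):
        """Grant at most one vote per term, and step down if we see a newer term.
        (Omits Raft's check that the candidate's log is at least as up to date.)"""
        if msg["term"] > self.term:
            self.term, self.state, self.voted_for = msg["term"], "follower", None
        granted = msg["term"] == self.term and self.voted_for in (None, msg["src"])
        if granted:
            self.voted_for = msg["src"]
        self.send(msg["src"], {"type": "vote", "term": self.term,
                               "granted": granted, "src": self.id})
```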