diff --git a/src/blog.filler.html b/src/blog.filler.html index d71d576..f8e5b73 100644 --- a/src/blog.filler.html +++ b/src/blog.filler.html @@ -2,6 +2,8 @@
A collection of my thoughts, some of them may be interesting
+[ Side Project Log 8/15/23 ] - August 15th, 2023
+[ Side Project Log 8/08/23 ] - August 8th, 2023
[ Side Project Log 7/12/23 ] - July 12th, 2023
[ Side Project Log 4/29/23 ] - April 29th, 2023
[ Side Project Log 3/27/23 ] - March 27th, 2023
diff --git a/src/blogs/side-project-8-15-23.filler.html b/src/blogs/side-project-8-15-23.filler.html new file mode 100644 index 0000000..f19d689 --- /dev/null +++ b/src/blogs/side-project-8-15-23.filler.html @@ -0,0 +1,56 @@ +This side project log covers work done from 8/8/2023 - 8/15/2023
+ ++ I added a frontend to Olney, along with a feature that automatically keeps track of your job applications by monitoring your email. +
+ ++ The frontend was made with Svelte. I chose not to use any UI/CSS libraries as I wanted to keep the number of + dependencies low. This was another good opportunity to learn about Svelte. +
+ ++ This is the killer feature that I initially set out to build Olney for. It works by having the user forward their email to an instance of Olney. To receive email, Olney uses Inbucket, a mailserver easily hostable within Docker. Olney listens on a websocket for incoming mail. Whenever a new mail message is received, Olney uses the OpenAI API to get a summary of the email in the following format: +
+ +
+{
+ isRecruiting: bool, // is the message about recruiting?
+ recruitingInfo: null | {
+ location: string, // Location in City, Province/State, Country format
+ company: string, // Casual name of company e.g: Google, Cisco, Apple
+ position: string, // Name of job position
+ type: "assessment" | "interview" | "offer" | "rejection" | "applied", // What the message is discussing
+ dateTime: string, // DateTime communication rec'd OR DateTime that is being discussed (i.e. interview date confirmation)
+ name: string // Name of event, giving more detail to type
+ } // null if message is not about recruiting, fill with values if it is
+}
+
+
+ ++ Olney then takes some details from this data, namely the company, position, and location, and uses the OpenAI API to generate an embedding. We then query for the closest match among the job applications in the database (with pgvector). Once we have the matching job application, we add the event to the database, using the job application's id as a foreign key. +
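The nearest-match step above can be sketched in plain Rust. Olney does this lookup inside Postgres with pgvector (e.g. ordering by the `<->` distance operator), so the in-memory version below is only an illustration of the idea; the struct and function names are hypothetical, not Olney's actual API.

```rust
// Illustrative sketch: find the stored job application whose embedding is
// closest to the email's embedding, using cosine similarity in memory.
// (Olney does the equivalent in Postgres via pgvector.)

struct Application {
    id: i64,
    embedding: Vec<f64>, // embedding of "company, position, location"
}

fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

/// Return the id of the application closest to the email's embedding.
fn closest_application(apps: &[Application], email_embedding: &[f64]) -> Option<i64> {
    apps.iter()
        .max_by(|a, b| {
            cosine_similarity(&a.embedding, email_embedding)
                .partial_cmp(&cosine_similarity(&b.embedding, email_embedding))
                .unwrap()
        })
        .map(|app| app.id)
}
```

In the real system the vectors come from the OpenAI embeddings endpoint and the comparison runs as a single SQL query, so no application-side loop is needed.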
+ ++ Embeddings were chosen as the lookup method so that the data parsed out of the email doesn't have to be an exact match for what the user entered. This also allows the lookup to work even when certain details, such as the location, are missing from the email. +
+ ++ Olney should be open-sourced/released within the next week or two. +
+ +These projects had minimal/no work done on them: NWS, RingGold, SQUIRREL
diff --git a/src/blogs/side-project-8-8-23.filler copy.html b/src/blogs/side-project-8-8-23.filler copy.html new file mode 100644 index 0000000..b5cb8a7 --- /dev/null +++ b/src/blogs/side-project-8-8-23.filler copy.html @@ -0,0 +1,65 @@ +This side project log covers work done from 7/12/2023 - 8/8/2023
+ ++ SQUIRREL has been updated to work with INSERT INTO and SELECT queries. I also refactored much of the codebase to do error handling more elegantly and to make the parser + more extensible. Here's a screenshot of table creation, data insertion, and data selection: +
+ ++ The biggest challenge of this part was working on the parser, which has now been written three times. The approaches to the parsing were: +
+ +This was my initial and naive approach to the problem. I split the input string by its whitespace + and then queried values by referencing their indexes in the split string.
+This approach was cleaner than the first and led to a small parser; however, it required an external dependency (which I'm trying to minimize) and would have made it hard to add features to commands later down the line.
+This solution was more verbose than the others; however, it allows for easier development. This method works by splitting the query string into tokens. Tokens are the smallest pieces of data that a parser recognizes. SQUIRREL gets them by splitting the input by delimiters and using the split list as tokens (excluding whitespace). SQUIRREL recognizes the following characters as delimiters: +
+ +
+ ' ', ',', ';', '(', ')'
+
+
+ + This means that the string "INSERT INTO test (id) VALUES (12);" would be parsed into the list: "INSERT", "INTO", "test", "(", "id", etc. +
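The tokenizer described above can be sketched in a few lines of Rust. The function name is illustrative rather than SQUIRREL's actual code, but the delimiter set and the behavior (keep punctuation delimiters as tokens, drop whitespace) follow the description in the post.

```rust
// Sketch of the delimiter-based tokenizer: walk the input character by
// character, flushing the current token at each delimiter and keeping
// non-whitespace delimiters as tokens of their own.
const DELIMITERS: [char; 5] = [' ', ',', ';', '(', ')'];

fn tokenize(input: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut current = String::new();
    for c in input.chars() {
        if DELIMITERS.contains(&c) {
            // Flush whatever token we were building.
            if !current.is_empty() {
                tokens.push(std::mem::take(&mut current));
            }
            // Whitespace is excluded; other delimiters become tokens.
            if c != ' ' {
                tokens.push(c.to_string());
            }
        } else {
            current.push(c);
        }
    }
    if !current.is_empty() {
        tokens.push(current);
    }
    tokens
}
```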
+ ++ Once we have our list of tokens, we iterate through them, starting in a default state and performing a task determined by the current state, which usually includes switching to another state. We do this until we reach the end state. +
+ ++ For example, with the above insert statement, we would start in the IntoKeyword state which would ensure that "INTO" is the current token. + We would then transition to the TableName state which would read the table name and store it in the ParsedCommand struct we're returning. We + would then move to the ColumnListBegin state which would look for an opening parenthesis, and switch the state to ColumnName. This process + continues with the other parts of the query until the Semicolon state is reached which checks that the statement ends with a semicolon, then + returns the ParsedCommand struct. +
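A heavily condensed sketch of that state-machine walk is below, covering only the first few states of an INSERT. SQUIRREL's real parser has many more states (ColumnName, Semicolon, etc.) and richer error handling; the state names here just mirror the ones mentioned in the post.

```rust
// Condensed state machine for the prefix "INSERT INTO <table> (".
// Each loop iteration consumes one token and picks the next state.

#[derive(Debug, PartialEq)]
enum State {
    InsertKeyword,
    IntoKeyword,
    TableName,
    ColumnListBegin,
    Done,
}

#[derive(Debug, Default, PartialEq)]
struct ParsedCommand {
    table: String,
}

fn parse_insert_prefix(tokens: &[&str]) -> Result<ParsedCommand, String> {
    let mut state = State::InsertKeyword;
    let mut parsed = ParsedCommand::default();
    let mut iter = tokens.iter();

    while state != State::Done {
        let token = iter.next().ok_or("unexpected end of input")?;
        state = match state {
            State::InsertKeyword if *token == "INSERT" => State::IntoKeyword,
            // Ensure "INTO" is the current token, as described above.
            State::IntoKeyword if *token == "INTO" => State::TableName,
            // Store the table name in the ParsedCommand we're returning.
            State::TableName => {
                parsed.table = token.to_string();
                State::ColumnListBegin
            }
            // Look for the opening parenthesis; the real parser would
            // switch to a ColumnName state here instead of finishing.
            State::ColumnListBegin if *token == "(" => State::Done,
            _ => return Err(format!("unexpected token {token:?}")),
        };
    }
    Ok(parsed)
}
```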
++ Next steps for this are to add column selection to SELECT statements and add WHERE clauses to SELECT statements. +
+ ++ I added a feature to the Olney API which scans the pittcsc (now Simplify) summer internships GitHub repo and parses the data into JSON format. I parsed the markdown file they have using regex, which was relatively simple. There were some issues during development due to the changing structure of the markdown file. These issues are being fixed on a rolling basis. I expect the changes to slow down now that the transition from pittcsc to Simplify is complete. You can access the JSON at olney.nickorlow.com/jobs. +
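The core of that scraper is turning one markdown table row into structured data. Olney does this with regex; the dependency-free sketch below uses plain string splitting instead, and the column layout (company | role | location) is an assumption for illustration, not the repo's exact format.

```rust
// Simplified sketch: parse one markdown table row like
// "| Google | SWE Intern | Mountain View, CA |" into a struct.
// Column order (company, role, location) is assumed for this example.

#[derive(Debug, PartialEq)]
struct JobPosting {
    company: String,
    role: String,
    location: String,
}

fn parse_markdown_row(row: &str) -> Option<JobPosting> {
    let cells: Vec<&str> = row
        .trim()
        .trim_matches('|') // drop the leading/trailing pipes
        .split('|')
        .map(str::trim)
        .collect();
    match cells.as_slice() {
        [company, role, location, ..] => Some(JobPosting {
            company: company.to_string(),
            role: role.to_string(),
            location: location.to_string(),
        }),
        _ => None, // not enough cells: header separator, blank line, etc.
    }
}
```

A splitting approach like this is more brittle than regex when cells contain markdown links, which is one reason the changing file structure caused issues.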
+ +These projects had minimal/no work done on them: NWS, RingGold
diff --git a/src/blogs/the-perfect-laptop.html b/src/blogs/the-perfect-laptop.html new file mode 100644 index 0000000..e69de29 diff --git a/src/extra.filler.html b/src/extra.filler.html index 86cb869..58ada38 100644 --- a/src/extra.filler.html +++ b/src/extra.filler.html @@ -36,21 +36,21 @@SQUIRREL stands for SQL Query Util-Izing Rust's Reliable and Efficient Logic. It is a SQL database that I am writing in Rust. Currently, it can parse CREATE TABLE commands, and works with the