Added on: Sunday, 05 December, 2021 | Updated on: Thursday, 02 March, 2023
I’ve been programming for a fairly long time, and tinkering with computers for much longer than that. In that amount of time, I’d say I’ve tried my hand at a lot of different aspects of computing.
This post is really just a look back at some of my projects and how they originated.
Side Note: This particular article will likely not be maintained much, because I don’t want to talk about future projects in this one post; they tend to be complicated enough to demand an article of their own. Consider this a non-exhaustive list of projects up until 2021 and a status update up to August 2022.
This is probably the first real program I ever wrote. I say real because up until that point, I’d only been building things out of tutorials, and this one was something that I thought up and implemented myself.
It’s definitely not good code; I’ve somehow written functions called
break3(), used some global variables, and also printed a somewhat rude message when the user refused to take a break.
However, it was:
- A working program
- A starting point
I remember working really hard to build this program and to get it working, and almost giving up on the program, but I just kept going. It felt so rewarding to get this first program up and running, even if I had to write hacky, repetitive code, and even if it didn’t really do all that much. All that mattered was that this program was something I willed into creation, for better or for worse, and this prospect of creation excited me to no end.
The moment you open the GitHub repository for this program, you’ll notice the much better README.md file (screenshots!).
This project just creates a list in an interactive way in the terminal. It makes liberal use of ANSI escape codes to clear the screen (which doesn’t work on Windows) as well as Python’s
input() function. That’s it. There’s no ncurses, and I attempted to add a rudimentary edit feature, but it’s been commented out, so I don’t think I ever finished it.
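The core loop can be sketched roughly like this. This is a hypothetical reconstruction of the technique, not the actual code from the repository:

```python
# Clearing the terminal with an ANSI escape sequence and building a list
# from input() -- the same trick the project uses. "\033[2J" clears the
# screen and "\033[H" moves the cursor home; as noted above, this doesn't
# work in older Windows consoles.
CLEAR = "\033[2J\033[H"

def build_list():
    """Interactively collect items until the user enters a blank line."""
    items = []
    while True:
        print(CLEAR, end="")
        for i, item in enumerate(items, 1):
            print(f"{i}. {item}")
        entry = input("Add item (blank to finish): ")
        if not entry:
            return items
        items.append(entry)
```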
It taught me a lot about file handling and how to organize a project. You’ll notice a gargantuan leap in code quality from Eye Time Tracker, from the naming structure (bye bye
break25()) to the commenting frequency.
The main reason why I wanted to write time2lib, which pretty much gives you a couple of functions to integrate the behavior of Eye Time Tracker into your own programs, was to improve upon Eye Time Tracker. I’d opened the code up on a whim, saw the quality of it, and promptly got to work rewriting it.
Also, the name is pretty funny: it’s a play on the phrase “time to live”, and the “2” signifies that this is the second time I was writing something like this. I just had to put that in somewhere.
The README.md is again pretty polished and even has a roadmap of things to do, but it’s somewhat lacking in installation instructions for Windows.
Scraper was a pretty fun project for me. I’ve always marvelled at the Internet and the free flow of information it allows, and I wanted to use this information in new, cool and wacky ways, kind of like what Tom Scott describes of the Web 2.0 days in this YouTube video.
I’d actually made a simple program that would go to the Wikipedia English home page, fetch details about the featured article of the day and make a hangman game out of it. I called it wikiquiz, and I haven’t released it on GitHub*, but the scraping part of that code was released (after a couple of modifications) as scraper, a very basic set of tools to scrape webpages for information.
Keep in mind I never wanted to match or compete with the likes of Beautiful Soup with this project, but rather to:
- Get an idea of how this stuff works (granted, my implementation isn’t very efficient or extensible, but I’d get the gist of things)
- Have a very simple installation procedure: plop the scraper.py file into your project and import it. That’s it. It uses urllib, which is in Python’s standard library, so there’s basically no maintenance or updating needed on my end, which makes it a pretty easy option if you have a static page you need to get your information from.
The scraper isn’t very good, admittedly, but I still think it’s a somewhat clever way of approaching the problem of parsing HTML, if you don’t care too much about speed.
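To give a flavor of the approach, here’s a minimal sketch: fetch a page with the standard library’s urllib and pull values out with plain string methods instead of a full HTML parser. The function names here are hypothetical, not scraper’s actual API:

```python
from typing import Optional
from urllib.request import urlopen

def fetch(url: str) -> str:
    """Download a page and decode it as text (no external dependencies)."""
    with urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

def extract_between(text: str, start: str, end: str) -> Optional[str]:
    """Return the substring between the first occurrence of start and end,
    or None if the markers aren't found -- naive, but fine for static pages."""
    i = text.find(start)
    if i == -1:
        return None
    i += len(start)
    j = text.find(end, i)
    return text[i:j] if j != -1 else None
```

The obvious trade-off, as mentioned above, is that string matching like this breaks the moment the page layout changes, which is exactly what happened to wikiquiz.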
* Wikipedia already has an API, and I don’t really want to encourage lots of unnecessary traffic to their website. Also I did try to run it, and it doesn’t work anymore. One of the many pitfalls of scraping web pages, I’m afraid.
If you’re particularly observant, you’ll have noticed that there are a lot of gaps between my releases of these projects on GitHub. The truth is that I’ve created (more accurately, tried to create) lots of things, but I only released something on GitHub if it seemed a little original and was functional as well. If it had too many edge cases to fix or was far too ambitious for the time, I never published it.
During 2020, I also started to work on a game called panzer for my class 12 computer science project, together with two of my friends. There’s little to no git history for this, but that’s because I was unaware that GitHub had made private repositories free for non-paying users in 2020. We instead used the very low-tech option of sending each other the source files we’d changed and pasting them into our working directories. At the end of each coding session, I’d create a zip folder and keep that as a backup of our work in case we messed something up the next day.
The game itself is pretty cool, in my opinion. You control a tank whose job is to protect the base (in the center of the map) from a variety of bots that spawn from the corners of the screen. The tank you control is basically invincible, but the base is not, and the longer the bots are touching the base, the more health drains away from it. There’s also a rage mode, unlocked on reaching 100 points or more, and some powerups too. For how long it took us to create it (maybe 1-2 hours of coding for ~4 days a week on average, over 1.5-2 months, the last 2 weeks of which were really just polishing up the game a bit and creating the necessary documentation), I think it turned out pretty great. We used MySQL for keeping track of scores and generating the high-scores table, which was really only there because it was a requirement in the school project guidelines.
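The score keeping boils down to a single table with inserts and a sorted query. Here’s a rough sketch of the idea; I’m using the standard library’s sqlite3 as a stand-in for the MySQL table the project actually used, and the schema and function names are hypothetical, not panzer’s real ones:

```python
import sqlite3

def make_db():
    """Create an in-memory database with a single scores table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE scores (player TEXT, score INTEGER)")
    return db

def add_score(db, player, score):
    """Record one finished game's score."""
    db.execute("INSERT INTO scores VALUES (?, ?)", (player, score))

def high_scores(db, limit=5):
    """Return the top scores, best first -- the high-scores table."""
    rows = db.execute(
        "SELECT player, score FROM scores ORDER BY score DESC LIMIT ?",
        (limit,),
    )
    return list(rows)
```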
After the school year ended, we were free to publish the code on GitHub, and so my friends created accounts and I had them upload a part of the project so that all three of our names would show up as contributors on GitHub.
Around this time, I started to look into creating this blog as a way to share information about a variety of things, as well as to have a space to show off my projects and talk about them at length.
Well, you’re reading it! This page was built using SourceHut Pages. I wrote the HTML and CSS myself since I wanted to learn more about web technologies, but I did also write a tool to help me create templates and apply them to any blogpost I write in the future without too much work.
I migrated to SourceHut from GitHub, mostly for ideological reasons. I still maintain the GitHub account as it is necessary to contribute to many open source projects, but my own projects will be hosted on SourceHut only. I eventually plan to migrate this entire website to a custom domain, but I’ll still stick with hosting on SourceHut Pages unless I get enough traffic to warrant setting up my own web server.
blog-helper is intended to help me create blogposts more easily. There’s a config file written in JSON (with fairly descriptive variable names) for me to enable and disable features, as well as to tweak how templates are added to the webpages. I’ve made it flexible enough to work with anyone’s webpages, and it deliberately doesn’t do as much automation as other tools (no one-command publishing here!) so as to accommodate alternative workflows. Someone could run a spell check before running blog-helper, while someone else could compress all the webpages generated here to save bandwidth for both themselves and their users, and so on and so forth.
The code itself is very basic, and in fact scraper’s technique of using Python’s inbuilt string methods comes in very handy for a lot of it. The RSS integration does work, although I haven’t really enabled it for my website since I want to do that when I get a custom domain.
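The template step is the kind of thing those string methods handle easily. A rough sketch, assuming a hypothetical `{{title}}`/`{{content}}` placeholder convention of my own invention (blog-helper’s actual template format may differ):

```python
# Fill a page template's placeholders with a post's title and body
# using nothing but str.replace -- no templating engine required.
TEMPLATE = "<html><head><title>{{title}}</title></head><body>{{content}}</body></html>"

def apply_template(template: str, title: str, body: str) -> str:
    """Substitute the placeholders and return the finished page."""
    return template.replace("{{title}}", title).replace("{{content}}", body)
```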
This project also spawned a sort of spin-off that one could use in their own projects as a drop-in module if they wanted:
config-manager is in charge of JSON. That’s kind of it. It’ll save a dictionary to a config file, and it’ll load the config file back into a dictionary, from where you’re free to do whatever you desire.
Essentially, it is a basic abstraction over JSON to ultimately generate a dictionary/key-value pair in Python.
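In other words, something along these lines. This is a minimal sketch of the behavior as described above, with hypothetical function names rather than config-manager’s actual interface:

```python
import json

def load_config(path: str) -> dict:
    """Read a JSON config file into a plain dictionary."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def save_config(path: str, config: dict) -> None:
    """Write a dictionary back out as (pretty-printed) JSON."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```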
It’s somewhat mind-boggling to me to see how far I’ve managed to come from the beginning, but I have my sights set squarely on the future.
The first project I contributed to was tldr. You can write your own tldr client as well; there are just some specifications you have to follow.
What I really like about tldr is that they have a really good review process for everything, from the client code to the pages. They’re also very systematic and organized, and their project is genuinely one of the most useful things I’ve used.
I can’t count the number of times I’ve searched how to do something with git (for example) and have had to wade through technical docs where I can’t understand half the words without going down seven rabbit holes, StackOverflow answers to related questions, or random websites full of spam and ads that contain very little actual info.
The man pages are really good, but you have to know what you’re looking for.
tldr occupies a nice middle ground, and one of the main advantages they have over similar projects is the extensive standardization they follow.
If you’re looking to contribute, tldr could benefit greatly from translations of existing pages into other languages, as well as new pages for new tools that have cropped up.
In fact, tldr is so useful that I sat down and wrote a basic search function for the Python client (the one I use) so that I could find things easily. This eliminates another use case for my web browser, reducing context switching even more. For example, if I forget how to change branches in git, I can either:
1. Open my web browser and search "change branches git", or
2. Fire up a terminal and type:
tldr git --search "change branch"
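The search itself doesn’t need to be fancy. Here’s a minimal sketch of the idea, with a hypothetical function name and a plain dict standing in for the client’s local page cache (the real client reads markdown files from a cache directory):

```python
def search_pages(pages: dict, query: str) -> list:
    """Return the names of pages whose text contains the query,
    matched case-insensitively -- a simple substring scan."""
    q = query.lower()
    return [name for name, text in pages.items() if q in text.lower()]
```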
Update: While I couldn’t participate in GSoC 2022 due to time constraints, I did end up learning more C and developed two projects in it: cpanzer and chip8-emulator. I also wrote an article on the latter, which you can check out here.
Update: I did reveal what I was up to here.
I’m getting back into some competitive programming, mostly as practice to improve my problem-solving skills, but also to learn some more DSA.
In order to learn more about the nitty-gritty of computing, I’m planning on writing an emulator for CHIP-8. I’ve gathered the resources and am planning to write it in C. It’ll be a fun and interesting project, and I’ll be sure to write more about it here.
Update: I did write that article. It’s over here.
That’s all for today. Bye now!
This website was made using Markdown, Pandoc, and a custom program to automatically add headers and footers (including this one) to any document that’s published here.
Copyright © 2023 Saksham Mittal. All rights reserved. Unless otherwise stated, all content on this website is licensed under the CC BY-SA 4.0 International License