PDF Conversions – Today’s Necessity

Being a college student, I often find myself at the print shop, carrying all kinds of documents to be printed – fee slips, academic transcripts, scanned copies of handwritten notes and so on. While apps like CamScanner help in creating PDF copies of class notes, their functionality is limited to images captured directly within the app. Furthermore, the only file formats recognized at the print shop are JPEG, DOC(X) and PDF.

That’s why I have been scouring Google’s Play Store in pursuit of an app that can convert all my files to PDF, and into other formats as and when required. One such app that fits the bill is PDF Convertor, developed by Cometdocs.

Before reviewing the app, it is imperative to expound a little on the history and advantages of the file format this app is built around – the PDF.

The emergence of PDF

PDF, short for Portable Document Format, has a legacy spanning more than two decades: its first version was released on 15 June 1993 by Adobe as a proprietary file format. What made its popularity soar to new heights was Adobe’s Public Patent License for ISO 32000-1 (the standard PDF became in 2008), which allowed anyone to make, use, sell and distribute PDF-compliant implementations without paying any royalties to Adobe.

What makes PDF so special today?

There are practical reasons for PDF being the de facto standard for electronic documents. It converts cleanly to print-ready graphics on paper while preserving the hyperlinks, images and text embedded within it, making it a versatile format. The cherry on top is its file size, which is often much smaller than a JPEG scan of the same page, thanks to the data compression algorithms it uses.

Another factor is its OS independence, which allows it to look the same across all operating systems, making it more portable. Further, with recent versions of Android supporting PDF, its user base has expanded even more.

Having explained the PDF a little, it’s time to focus on the app itself.

The app’s interface (UI)

On opening PDF Convertor for the first time, the user is greeted with a blank screen, to which files can be added for conversion. There are a total of 24 conversion types to choose from, 7 of them available as paid features. As of 25 November 2017, the full pack costs 790 INR, while a la carte conversions are 250 INR each. I was especially interested in its capability to convert XPS to PDF, a hitherto locked feature for me. (XPS is the file format of the output plots generated by OrCAD PSpice, software I use for circuit simulations as part of my undergraduate course.)

Having unlocked the full pack, I set forth to use the app for converting the documents at my disposal.

Some of the in-app file conversions available.

There is also a batch conversion option that allows you to generate a multi-page PDF, or vice versa, depending upon the conversion options at your disposal. I didn’t unlock this feature, since my conversions never exceed a page or two.

An experience limited by Wi-Fi

Despite the app’s well-laid-out design – easy-to-find menus, buttons and notifications – and the slew of conversion options it offers, I was unable to enjoy it to the fullest. The main reason is the Wi-Fi connection at my residence, where signal strength is pretty erratic. More often than not, when trying to convert a file, I get the following message:

Check your connection and try again.

Though I couldn’t carry out conversions at all times, it has been a satisfactory experience. All conversions worked whenever the Wi-Fi signal was strong enough.

My thoughts and suggestions

Having used different methods of PDF conversion for a while now, I have come to realize that every file conversion involves three steps –

  1. Upload files to a server
  2. Wait for the server to convert the files
  3. Download the converted files

The reason most conversion apps draw flak from users is that they falter at step 1 itself. Not everyone has access to dedicated, high-speed Internet – especially users in developing and underdeveloped nations – making this a huge obstacle that developers need to overcome.

A related point of debate is the use of browser-based converters for the same task. For most users, who generally have only a file or two to convert, a web page seems more fitting than a dedicated app.

Keeping this in mind, PDF Convertor could incentivize continued usage by allowing users to create an offline queue of files to be uploaded. As an analogy, we have YouTube Offline, a feature that lets users queue videos, which are downloaded as and when signal strength is sufficient.
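The idea can be sketched in a few lines of Python. Everything here is hypothetical – the class name, the `is_online()` check and the `upload()` callable stand in for whatever the app actually uses – but it shows the queue-and-flush behaviour I have in mind:

```python
from collections import deque

class OfflineQueue:
    """A hypothetical offline upload queue: files wait locally until
    a connection is available, then get flushed to the server."""

    def __init__(self, is_online, upload):
        self.pending = deque()      # files queued while offline
        self.is_online = is_online  # callable: do we have a connection?
        self.upload = upload        # callable: upload one file

    def add(self, filename):
        """Queue a file; try to flush immediately in case we're online."""
        self.pending.append(filename)
        self.flush()

    def flush(self):
        """Upload queued files until the queue empties or we go offline."""
        while self.pending and self.is_online():
            self.upload(self.pending.popleft())
```

Whenever connectivity returns (say, on a system broadcast), the app would simply call `flush()` again.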

Overall, I find this app an impressive one, and look forward to improvements in its UX.

External Links

  1. PDF Convertor on Google Play; the app:

  2. PDF, What is it FOR?; a video:

  3. PDF, Version 1.7 (ISO 32000-1:2008); a technical description:

  4. Document Management – Portable document format – Part 1: PDF 1.7; the 2008 documentation:

  5. Knowing When to Use Which File Format; an article:



Duolingo – an App Review

I recently acquired a brand-new phone – a Samsung Galaxy J7 – as a replacement for my previous Nokia C6-01 smartphone. The reason is pretty simple: I wasn’t able to install any apps on the Nokia, since its Symbian OS is not compatible with .apk files (the file extension for Android apps).

The first thing I did with my new phone was to install a few apps – Duolingo being one of them. Since I had come across multiple recommendations for this app, I decided to give it a try. Besides, I was looking for ways to improve my language proficiency in Urdu and Japanese.

Having used the app for a little while now, I feel that it deserves a review of its own – hence this article!

The interface – first impressions

One feature I really admire about Duolingo is its UI (user interface) – clean, simple and intuitive. When the app is opened for the first time, the user is greeted with a plethora of languages to choose from – German, Korean, English, Russian and Japanese, to name a few. Depending upon the user’s language preferences, it offers these courses taught from different base languages.

Since my preferred language is English, I scrolled through the section for English speakers. To my dismay, I couldn’t find Urdu listed under any section, let alone the English section. However, it did list Japanese, which I decided to try out.

The UX (user experience)

Once a course is selected, the user is redirected to a test for that language. The test is completed only after a certain number of questions are answered correctly, following which some XP is earned, along with a few ‘lingots’ – the currency used for purchases from the ‘Shop’.

Each ‘skill’, indicated by an egg icon, comprises a number of tests, which must be completed in a similar fashion. Each test has multiple-choice questions, translation tasks (audio and/or text), and word-match questions. The more questions the user answers correctly in a row, the more XP and lingots he or she earns.

While the app may be used without registration, things get a little tricky when the user wishes to save his or her progress. In that case, registration is required.

However, once registered, users are allowed to join a language club. These clubs have weekly leaderboards, which effectively gamify the app by creating an atmosphere of competitiveness.

Improving the app

If you’re looking for an app to learn languages in the form of a casual ‘game’, then Duolingo is the way to go. However, I wasn’t quite satisfied with the app, and probably had unrealistically high expectations from it.

In order to truly learn a language, one must not only read and listen to it, but also write and speak it. While I don’t mind jotting down words in a notebook, I don’t know whether my handwriting is legible or not. If there were a ‘capture’ feature in Duolingo to detect and identify handwritten text, it would be a big help in improving my Japanese handwriting.

When it comes to speaking the language, it is tough to pick up pronunciations correctly, even with audio read-outs of displayed words. For this, I suggest adding IPA transcriptions to every word, and having the app read out those transcriptions. This would go a long way in making the app’s experience more fulfilling.

Edit: After publishing this post, I came across TinyCards, another app developed by Duolingo. Its feature of letting users create custom decks really impressed me.

In fact, I would go so far as to say that TinyCards is the perfect learning aid I have come across, for teachers and students alike.

Here is a deck of Urdu words I created, using this app:


Related links

  1. Duolingo on Google Play; the app:

  2. IPA transcriptions in Duolingo; a GitHub repo:

  3. Recognizing handwritten glyphs; a research paper:



Text Detection using Tesseract

For the past couple of months, my colleague and I have been working on a research project.

The goal is simple – detect characters in a real-world image. However, the intermediate steps involved make the task far less straightforward than you might think!

Before discussing the technicalities of the project, it’s important to know what OCR is.

OCR – the heart of text detection

OCR, short for Optical Character Recognition, is used to identify glyphs – be they handwritten or printed. Each glyph in an image is detected and separately assigned a character by the computer.

While OCR has gained traction in recent times, it is not a new concept. In fact, it is this very technology that bank employees use to read cheques and bank statements.

For this project we chose Tesseract as our OCR engine. Originally developed at Hewlett-Packard and later sponsored by Google, it is the engine behind the image-to-text feature in Google’s Keep app.

The project’s nitty-gritties

We have limited our scope to printed text – specifically, street signs – and are attempting to convert the captured images to .txt files. This is how our code is intended to work:

If it works, it would then be possible to scale down the file size – a very handy tool for storing names of places on smartphones, which always come equipped with a camera these days. Ideally, such a task would be easy to accomplish: perfect lighting, no perspective distortion or warping, and no background noise.

Reality, unsurprisingly, is quite the opposite. Hence, we are trying to process the images before feeding them to Tesseract, which is known to work best with binary (black and white) images.

According to our plan, we shall implement a three-step method:

  1. remove perspective distortion from the image
  2. binarize the image
  3. pass the image through Tesseract
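Step 2 can be prototyped independently of the rest. Below is a minimal sketch of global Otsu thresholding in plain numpy – a simple stand-in for the Kasar technique we are evaluating, not our final binarizer:

```python
import numpy as np

def binarize(gray):
    """Binarize a grayscale image (2-D uint8 array) with Otsu's method:
    pick the threshold that maximizes between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    w0 = 0
    sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]               # pixels at or below threshold t
        if w0 == 0:
            continue
        w1 = total - w0             # pixels above threshold t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0              # mean of the dark class
        m1 = (sum_all - sum0) / w1  # mean of the bright class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    # black text on a white background, as Tesseract prefers
    return np.where(gray > best_t, 255, 0).astype(np.uint8)

# Step 3 would then be a single call, e.g. via the pytesseract wrapper:
# text = pytesseract.image_to_string(Image.fromarray(binarize(gray)))
```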

Training the Tesseract engine

Before processing the images, the OCR engine needs to be ‘trained’ in order to work properly. For this reason, I downloaded jTessBoxEditor – a Java program for editing box files (the files Tesseract generates when detecting glyphs). Since the project runs on Ubuntu, I had to download and install the Java Runtime Environment (JRE) to run jTessBoxEditor.

Since my portion of the project involves training the engine, I need to generate sample data for it. The engine needs to be fed samples of Times New Roman, Calibri, and Arial – the three fonts we came across in our images.

Our progress so far

Tesseract is still being trained, and the sample data is yet to be generated. After a while, I realized that the required fonts were available in my Windows installation, so I copied the font files over to Ubuntu and installed them successfully. One step down, several more to go!

On the image processing side, we are currently evaluating a Python implementation of ‘font and background colour independent text binarization’, a technique pioneered by T Kasar, J Kumar and A G Ramakrishnan.

I modified the code to work with python3, in order to avoid discrepancies between the various modules of our project. Here is the link:


A web forum also suggested that the input images be enlarged or shrunk, in order to make the text legible. This task calls for ImageMagick, software that uses a CLI (command-line interface) for image manipulation. Therefore, I downloaded a bunch of grayscale text images (with the desired fonts, of course), and decided to convert all of them to PNG.

For some reason, I’m not able to do so, and have failed to convert any of them.

As an example, here is a sample command:

magick convert gray25.gif gray25.png

This is the error message I get in Terminal:

No command 'magick' found, did you mean:

 Command 'magic' from package 'magic' (universe)

magick: command not found

I’ve tried re-installing ImageMagick several times, but to no avail. One lead I still need to verify: the `magick` front-end was only introduced in ImageMagick 7, while Ubuntu’s repositories still package version 6, where the same conversion would be written as `convert gray25.gif gray25.png`.
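In the meantime, the enlargement itself can be done in Python. Here is a sketch of nearest-neighbour upscaling using numpy’s `kron` – this assumes the pipeline already holds images as 2-D grayscale arrays, which is how the rest of our code treats them:

```python
import numpy as np

def enlarge(gray, factor=2):
    """Nearest-neighbour upscaling: replicate every pixel into a
    factor x factor block, enlarging the image with no extra deps."""
    return np.kron(gray, np.ones((factor, factor), dtype=gray.dtype))
```

A 2x or 3x enlargement is what the forums suggest trying first before handing the image to Tesseract.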

What’s the scope?

This is a question almost everyone asks whenever I discuss my project. Indeed, it doesn’t look very promising at first sight, due to the tedious nature of the steps involved.

However, its scope is quite vast – ranging from the preservation of ancient texts and languages, to the translation and transliteration of public signage, to converting street signs to audio for the visually impaired. In fact, it may even serve as a last resort for driverless vehicles navigating an area when GPS fails.

We are only limited by our imaginations. Once merged with technology, they can be used to achieve miracles!

External Links

  1. Font and Background Color Independent Text Binarization; a research paper:

  2. Perspective rectification of document images using fuzzy set and morphological operations; a research paper:

  3. jTessBoxEditor; a how-to guide:

  4. AptGet/HowTo; a how-to guide:


Using Oracle’s VirtualBox – A Review

Of late, I have been tinkering with Ubuntu. The reason? I needed to work on a Python project, and wasn’t making much headway with it.

Being a Windows user, I was finding it difficult to install the required Python modules for my project. This was especially exasperating with SciPy, a library that much of scientific Python depends on. Unfortunately, I could only get its latest distribution working properly on Linux.

At the same time, I was apprehensive of even touching Unix, since it had always spelt doom for my PCs. Dual-booting Windows with a Linux distro had, in the past, caused many a computer to crash – right in front of my eyes.

Hence, I had to overcome my apprehensions, tap into the hitherto alien Unix environment, and work on my project from there –  whether I enjoyed it or not.

While scouring the internet for solutions, I stumbled upon VirtualBox, VM (virtual machine) software by Oracle. After going through a few tutorials, I decided to give it a go.

What’s a Virtual Machine?

A virtual machine is software that emulates an operating system, letting the user control one OS while working within another. You may think of it as one OS nested within another.

It’s amusing to think, “What if I run a virtual machine within my virtual installation? Is infinite nesting of OSes allowed?”

Ideally, such an experiment would be possible. In reality, hardware limitations would render it futile, since emulation saps a significant portion of the host’s resources, such as RAM and storage. The hardware has to be divided between the host and the nested (also called guest) OS – a situation very similar to dual-booting.

As an explanation, I shall now use this infographic.


It’s clear that with more nested OSes, each one has less computational power at its disposal. In fact, OS 7 is a mere shadow of the C64 (Commodore 64) – itself an obsolete system by today’s hardware standards, with just 64 KB of RAM.
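The infographic’s trend can be put into rough numbers. This is a toy model of my own – it simply assumes each level of nesting hands half of its RAM down to its guest, which real hypervisors don’t enforce:

```python
def guest_ram(host_ram_kb, levels):
    """RAM (in KB) left for the innermost guest, under the toy
    assumption that every level allots half its RAM to the next."""
    for _ in range(levels):
        host_ram_kb //= 2
    return host_ram_kb

# Starting from 6 GB, seven levels of nesting leave about 48 MB --
# and by the 17th level the guest dips below the C64's 64 KB.
```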

A review of the installation

Here’s one universally appreciated feature of VirtualBox – it allows hassle-free toggling between the guest OS (in my case, Ubuntu) and the host – all with a simple click of the mouse.

This is especially useful to me, since I’m a staunch Windows user, and can’t stand Ubuntu’s interface for too long. Sure, Ubuntu allows for quick development of program code, but when it comes to good UI (user interface), I feel that its developers should borrow some design tips from Windows 8.1, which is the OS currently installed on my PC.

In fact, here’s what it looks like, along with VirtualBox:

Since my PC has around 500 GB of disk space and 6 GB of RAM, I’ve found it convenient to run a fully installed (virtual) copy of Ubuntu, with 20 GB of disk space and 1 GB of RAM allocated to it.

So far, it’s working well for me, and I’m quite satisfied with it!

Related links 

The VirtualBox website:


Installing Ubuntu within Windows using VirtualBox; a how-to guide:


Sharing files between VirtualBox and host; a how-to guide:


Microcontrolling an LCD & LED

Ever wondered how a display system works? From the traffic lights on busy roads to the laptop or mobile display on which you are currently reading this blog, the tech involved may seem daunting on the surface, but its logic is relatively simple.

In fact, the simplicity might appeal to your curiosity, setting you on the path towards your very first electronics project, just like me. Well then, let’s dive in!

LCDs and LEDs – what are they?

Note: The LCD discussed here is a screen that displays characters, while the LED I have used is a simple diode with two terminals. The latter is different from LED displays, which are another category of flat-panel screens.

LCD is short for Liquid Crystal Display, most commonly seen in handheld calculators. It consists of a layer of liquid crystal sandwiched between two polarizing sheets, which must be oriented at right angles to each other for the display to work. Its lowermost layer is either a mirror or an LED panel (if it’s backlit). To avoid confusion, I shall only discuss reflective LCDs here.

These displays are divided into cells, whose liquid crystal is individually controlled. In the OFF state, the liquid crystal is in a helical configuration, allowing light entering the top polarizer to pass through the second polarizer as well, resulting in a blank screen. Once it enters the ON state, the liquid begins to ‘untwist’, causing light to get blocked by the second polarizer, making the cell appear black in colour.

The LED (Light Emitting Diode), on the other hand, is a two-terminal device that is relatively easy to use, and may be plugged into a circuit like any other component. As the latest advancement in indoor lighting, it is significantly more energy-efficient than incandescent and fluorescent technologies.

Though this project uses both an LCD and an LED, I have laid more emphasis on the former’s functioning, as it requires more inputs for setting the cursor position, and ensuring that the output text is displayed in the way intended. The LED is simply a blinking bulb in this project.

Enter Arduino – the microcontroller

The Arduino project can be traced back to 2003, when Massimo Banzi, along with colleagues at the Interaction Design Institute Ivrea, set out to create a range of microcontroller boards economical enough for students and professionals. Today, Arduino is a leading maker of open-source hardware, with a wide consumer base.

From what I read, it’s useful in creating a large number of projects, which I shall put to test, starting with this project. Here, I have used an Arduino Uno to control the LCD screen and LED bulb’s behaviour.

Assembling the hardware

The main components used in this project are: an LED (3 V), a 220 Ω resistor, a 16×2 LCD character display (Hitachi HD44780), a solderless breadboard, a 10 kΩ potentiometer (for brightness control), an Arduino Uno, and an A/B USB 2.0 cable (for connecting the Arduino board to the computer).

A few optional but useful tools include: a table lamp, a wire cutter, a penknife, and a pair of tweezers (to pull out wire stubs in case a wire snaps).

Here’s the circuit’s breadboard view:

The data inputs of the LCD screen are labelled D0 to D7. In this project, I have used the 4-bit mode of operation, which requires only four of the data lines (D4–D7).

It is recommended that the wires to the character display be soldered, as loose connections cause data errors at its input pins.

Getting the code right

You may obtain the program code via this link to my GitHub repository:


The Arduino IDE is required to compile the program, and upload it to the microcontroller board. Here is the link to its download page:


Fire up the system!

Finally, the A/B cable is connected to the computer’s USB port. Once the Arduino board has been correctly identified, along with its COM port, the program is uploaded, and the result obtained is something like this:

Notice that the Arduino board’s built-in LED (the dot next to the red LED bulb) flashes at the same frequency as the latter.

The potentiometer’s knob will require some adjusting in order to get the correct brightness for the LCD screen.

Change the arguments of delay(), print() and setCursor(), and see how it alters the output. Is it expected, unusual, or dramatic?

Further, if I want a scrollable display, what hardware/software modifications do I need? Leave your suggestions in the comment section!

Related Links

Fundamentals of Liquid Crystal Display; a white paper:


The History of the Light Bulb; an article:


LED Basics; a video:


Arduino – Troubleshooting: