Magic with Chrome’s Omnibar

It’s been a while since I last posted here (2 years!), but I’ve decided to try to post more often. I’ll probably migrate to another platform soon (from WordPress to Jekyll, most likely), which should help a bit.

There’s a “hack” on Chrome that I’ve been using for the last few months and that I find amazing: setting Google’s “I’m Feeling Lucky” search mode as the default search engine in Chrome’s Omnibar. This will also work with any other browser that supports setting a custom search engine (so, almost all of them).

As you may know, the Omnibar (a.k.a. navigation or search bar) accepts both website URLs and search queries. Here’s the neat part: with Feeling Lucky as the default search engine, entering a search term redirects you straight to the most likely result when Google is confident enough about it, and falls back to the regular Google Search page otherwise. Cases where this is amazing: visiting websites without typing the full URL (OK, autocomplete over your most visited URLs already gives you that), finding answers to questions on, e.g., Stack Overflow at light speed, pasting titles of academic papers and getting their PDFs directly… you name it!

Here’s how to do it: go to Chrome’s settings, open the search engine manager, and edit one of the entries (why isn’t there an “add” button?) so its URL reads:

{google:baseURL}search?ie={inputEncoding}&q=%s&btnI=

It’s all about the “btnI” parameter, which is what the “I’m Feeling Lucky” button adds to the query. Name the entry however you want, and set it as the default search engine. Feels like magic! If you are not using Chrome, you can use:

https://google.com/search?q=%s&btnI=

Now, one last tip: for cases in which you need a full list of results, take your good old Google search engine and set its trigger (the middle column) to something short, like a comma. That way, when you focus the Omnibar (Cmd+L or Ctrl+L), you can trigger a regular Google search by pressing comma and space, followed by the search term. I also have additional shortcuts with mnemonics like v+comma for YouTube videos, m+comma for Google Maps, etc., although I never remember to use them.
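For reference, those extra shortcuts are just more custom search engines whose URLs point to each service’s public search endpoint; mine look roughly like the following (double-check them when you set yours up, since endpoints may change over time):

https://www.youtube.com/results?search_query=%s

https://www.google.com/maps?q=%s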

Have fun!


JavaScript Event Loop

I went to a meetup yesterday and there was a bit of confusion about how JavaScript asynchronous code works and how to deal with it. When people read about asynchronous code, they tend to think of threads being dispatched and code being interrupted by callbacks. This leads to the well-known concurrency management issues: one has to create locks and all that sort of thing.

With JavaScript (and any single-threaded, event-looped system), things are a bit different. Code runs in uninterrupted “bursts”. There is an event queue, and each burst corresponds to one event on the queue being processed. The “event loop” is an infinite loop that keeps popping events from the queue as they arrive and running their associated bursts, or callbacks. In the browser things get a little more complicated because of the rendering loop, but this mental model is still valid.
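If it helps, the whole loop can be sketched in a few lines (this is just the mental model, not real engine code; queue.waitForNext is made up for illustration):

while (true) {
  var event = queue.waitForNext(); // blocks until an event arrives
  event.callback();                // one uninterrupted burst; nothing else runs meanwhile
}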

What is an event? We are talking about low-level events here. They are not those nice .on() events, although the library implementing those can force a new low-level event to be created instead of just calling the callback. Some examples of low-level events are timers, network connections, disk I/O and user input.

The important thing here is that, in order to keep the app responsive, the bursts should be as small as possible, because nothing else will be processed until the current burst has finished. In nodeJS, for instance, I/O is natively split into separate bursts, which is great: one burst issues the I/O call, then we can carry on processing other events while we wait for the response, which will trigger another burst. We can even have separate bursts for each chunk of data being read or written.
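Here’s a tiny nodeJS sketch of that split (it assumes a readable file at /etc/hostname, but any path will do):

var fs = require('fs');

fs.readFile('/etc/hostname', 'utf8', function (err, data) {
  // This callback runs in a later burst, once the read completes.
  console.log('second burst:', err ? err.message : data.trim());
});

// Still inside the first burst: this line prints before the file contents.
console.log('first burst: read requested, carrying on');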

What happens if the main script has a chunk of code which is very slow (say, because it runs an iterative algorithm that takes a while to complete)? The whole app is going to slow down, and there is no way to put the chunk in a separate thread. The answer is easy: run each iteration of the algorithm in a separate burst, or at least the whole chunk in its own burst. The first thing a chunk of code needs in order to become a burst is to be a separate function. Then, instead of calling the function, call a low-level event dispatcher; for instance, set a timer that will fire in 0 milliseconds. This defers the execution of the function until a later burst.
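In code, the deferral is a one-liner (slowPart here is a hypothetical function wrapping the heavy chunk):

// Instead of calling slowPart() directly...
setTimeout(slowPart, 0); // ...schedule it, so it runs as its own burst.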

The tricky thing here is that there might be other events on the queue, so you can’t guarantee when the code will actually run. That’s why you should supply a callback and call it at the end of the slow function.

Notice that the slow function is not a low-level event by itself; the event is the timer, which has a callback of its own (the same way we provide one to the slow function) that runs within the same burst.
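Putting it all together, here is a sketch of an iterative algorithm split into one burst per iteration (heavyIteration stands for a hypothetical expensive step, and done is the completion callback):

function runIterations(n, done) {
  function step(i) {
    if (i >= n) {
      done(); // all bursts have finished
      return;
    }
    heavyIteration(i); // hypothetical: one expensive iteration
    setTimeout(function () { step(i + 1); }, 0); // next iteration, next burst
  }
  step(0);
}

runIterations(1000, function () {
  console.log('algorithm finished');
});

Between bursts, the event loop is free to process clicks, I/O callbacks and so on, so the app stays responsive while the algorithm runs.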

The UNIX Directory Structure

I considered myself familiar with Linux. I wouldn’t say proficient, but I can get around, compile stuff, and so on. I thought that really getting into it was a matter of spending many hours hacking around. But the Startup Engineering course from Coursera has shown me that there are many basic things left to learn, and that some of them are not that hard to pick up.

One of the things that has amazed me the most is the Unix filesystem layout, which I’m gonna summarize here. It turns out that it not only makes sense, but it’s also great from a SysAdmin perspective.

So, we have a root directory everything hangs from: the famous /. A bunch of directories hang from it in turn. Remember that “hanging” doesn’t necessarily mean that the files are physically one inside the other; after all, on the disk everything is sequential. As we will see, some of the folders are actually virtual and represent entities that can merely be thought of as files.

Most of the directories contain system files, which were created when the OS was installed and are only read during normal operation. Some examples are /bin (binaries required at boot time: ls, cp…), /sbin (binaries required at boot time and run by administrators: mount…) and /lib.

Some directories can be classified as “virtual”, because they represent things other than traditional files: /dev (access to devices), /proc (access to processes). There is also /tmp, which contains temporary files that are wiped out on reboot. Two others, /media and /mnt, contain mount points to other disks; that is, they actually point to files, but these files live on an external hard drive or a USB stick. This is also where disk images are mounted, which can happen while installing certain software packages.

The best known is /home, where every user owns a subfolder, accessible only to them and to the administrators, where they put all their documents.

The interesting part comes with /var, /usr, /etc and /opt. /var contains variable files, that is, files that change a lot, such as caches or logs. There is also /var/tmp, which contains long-term temporary files that won’t get erased on reboot. Databases, by the way, tend to store their data under /var as well (in /var/lib). There is a rough priority rule here: information in /var is generally not critical (logs are only accessed from time to time, and nothing terrible happens if one is lost), whereas information in /etc is very important, as we’ll see in a moment.

/etc contains “other stuff”, which in typical scenarios means system-wide configuration files. They don’t vary a lot, are not part of the core OS, and are not part of any particular program. Keeping them separate from the binaries allows packages to be upgraded easily without breaking the fine-tuning.

The trickiest one is /usr. Before anything else: the name comes from “user”, but the relation is no longer obvious. If the root directory holds the first layer of the executable part of the system, /usr is the second one. It has /usr/bin, /usr/sbin and /usr/lib, which contain programs that are part of the distribution but aren’t strictly required at boot. For instance, /usr is where one would find Python, Perl, Ruby…

/usr has one very special child, /usr/local. This directory is initially empty, and it’s where the administrator can place links to other executables in order to make them available system-wide. They are usually organized into /usr/local/bin and friends. This would be the third layer of executable stuff.

And last but not least, /opt contains software packages that are too complex or too big to live inside /usr; for instance, the Tomcat Java web server. Notice that the binaries of these packages need to be either linked from /usr/local/bin or added to the PATH.

If the machine is a server, /home would typically be very small, and a /srv directory would instead hold all the files to be served over the network (or to be interpreted when a request is received).

Does all of this make sense to you? It does to me, and I think it separates assets very well in terms of priority, importance and access type. This allows a lot of optimizations if the system is big. For instance, since /var is rewritten all the time, it makes sense to mount it on a disk with a very high write speed. /etc, on the other hand, doesn’t need that write speed, but it’s important to make sure its data won’t get corrupted; the same applies to /home. If I had to choose two directories for backups or for a RAID, it would be these two. /tmp, meanwhile, is often physically located in RAM rather than on disk.

Hello world!

Hi everybody!

It’s about time I opened a blog and started posting my findings. I’ve been thinking about this for some time; I come across very interesting material every day, ranging from cutting-edge JavaScript libraries to MATLAB tricks for extreme performance. I’m very thankful for all the resources I’ve found on the Internet, and it’s time to give back.

So, stay tuned!