Wednesday, September 20, 2017

How does Checkinstall work


Today, while trying to build and install some software from source, I discovered checkinstall.
What it does: it figures out what make install does, and then creates a Debian (or rpm etc.) package out of it. At first that might sound like "Ah! not much", but think again ...
A Makefile can have any kind of recipes - invocations of other shell commands, loops ... how the #&$% can it know!?

I could not rest until I understood how this cool thing works! As usual, I first Googled for articles, but didn't find any. OK, no problem - I downloaded the sources and grokked through them.

Here is a brief explanation:

- checkinstall relies on a utility called installwatch
- installwatch traps quite a few libc calls - open(), chown(), unlink() etc. - and logs the operations in a parseable form
- This info is used by a shell wrapper to translate it into commands/formats for the different package managers
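
The trapping itself is done with plain old LD_PRELOAD interposition: the install step is run with a wrapper shared library preloaded, so the interesting libc calls go through the wrapper (and get logged) before reaching the real implementation. Roughly like this - the library path and log variable below are placeholders, not installwatch's exact names:

# run the install step with a preloaded wrapper that logs file operations
LD_PRELOAD=/path/to/installwatch.so LOGFILE=/tmp/install.log make install

# the log of created/modified files then becomes the package's file list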

Thursday, August 24, 2017

Enable tracing on bash scripts

With shell scripts, it becomes difficult at times to debug which command took too long, or which step has an issue. Even though I add a generous amount of debug logs, there are times when -x (execution trace) is invaluable. But I don't want to litter my regular logs with the -x output either!
So, after a bit of searching, I found this nice snippet of bash:

trace_file() {
    exec 4>>"$1"       # open the trace file (append) on FD 4
    BASH_XTRACEFD=4    # send xtrace output to FD 4 instead of stderr
    set -x             # turn on execution tracing
}

With this function defined, I can invoke it in any script, with the path to the trace file:

trace_file /tmp/mytrace
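
One caveat: BASH_XTRACEFD needs bash 4.1 or newer; on older shells the trace still goes to stderr. The trace file collects the usual '+'-prefixed xtrace lines, which you can follow live while the script runs:

tail -f /tmp/mytrace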

Monday, October 10, 2016

How (not) to write a parser

One of my favorite subjects during college days was parsing and compiler construction (remember the Dragon Book?). Ever since the first time I tried lex/yacc (flex/bison), it has been a mysterious piece of software that I wanted to master!

Fast-forward 10 years, and a lot has changed in the field: LR(1) is passé, GLR and LL(*) are in vogue (thanks to tools like ANTLR).

When I thought about venturing into writing a parser for the YANG modelling language (RFC 6020), I first did some study and survey, and formed some new opinions about parsing:
- Don't roll out a recursive-descent parser (RDP) in desperation
- No separate tokenization phase (scanner-less)
- Not LR(1) - too limited
- Not ANTLR! (though it's a fantastic tool, it leans a lot towards Java!)
- Not Bison GLR! (no proper documentation!)
- Use PEG (yet another learning opportunity :-))
- Not C! (more about that later)

PEG is cool, and with my love for regexes, PEG attracted me way too much :-) I tried many PEG-based parser generators.

To start with, I picked up Waxeye; though it has a choice of many targets, the C port is not optimal (the author himself said more work is needed).

Then I tried peg/leg; though it's very similar to lex/yacc in usage, I found it hard to debug/modify the grammar. Part of the problem could also have been the YANG ABNF grammar (as defined in the RFC) - there are no reserved words in YANG. I tried peg/leg much more than Waxeye, but eventually gave up once I realized I was getting nowhere! So, C was gone - those were the only two options with a C target!

Lua/LPeg: Though I'm familiar with Lua, this was my first tryst with LPeg. I had heard a lot of praise for Roberto's work, and I got to know why quite soon. Within a few hours I was able to create a matcher (deliberately avoiding the word parser here) that could recognize 80% of the YANG files at my disposal! (I had to tweak the matcher for the remaining 20%.) This time I chose a different approach: instead of using the YANG ABNF grammar, I created some simple, relaxed rules.

Here is a brief description of the approach taken.

YANG statements can be either of the form:

1.   identifier [value];

Or:

2.   identifier [value] {
         one or more statements of form 1 or 2 (depending on the identifier)
     }

Note that value is optional. Since there are no reserved words, this leads to a simple matcher (~30 lines!). Even though there are no reserved words, we do have keywords, and based on those we should be able to say whether the nesting is right or wrong - I call this the validation phase. And that, too, can be simplified to triviality if we keep the AST as a simple Lua array (table).
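
A minimal sketch of such a relaxed matcher in LPeg (not the actual lyang code - the value rule here is deliberately crude and ignores quoted strings):

local lpeg = require "lpeg"
local P, S, R, C, Ct, V = lpeg.P, lpeg.S, lpeg.R, lpeg.C, lpeg.Ct, lpeg.V

local ws    = S(" \t\r\n")^0
local ident = C((R("az", "AZ", "09") + S("-_.:"))^1) * ws
-- relaxed "value": any run of characters up to ';', '{', '}' or whitespace
local value = C((P(1) - S(";{} \t\r\n"))^1) * ws

-- stmt <- identifier value? ( ';' / '{' stmt* '}' )
local G = P{ "Stmt",
    Stmt = Ct(ident * value^-1 *
              (P";" * ws + P"{" * ws * Ct(V"Stmt"^0) * P"}" * ws)),
}

local ast = lpeg.match(ws * Ct(G^1), [[
container C1 {
  leaf l1 {
    type string;
  }
}
]])
-- ast[1] = { "container", "C1", { { "leaf", "l1", { { "type", "string" } } } } }

The captured AST is just nested Lua tables, which is what makes the validation step trivial.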

For example: a leaf can be under a container, but not vice versa:

Valid:
container C1{
  leaf l1 {
     type string;
  }
}

Invalid:
 leaf l1 {
  container C1 {
  }
}

I keep a list of allowed-children for each type of node. Additional checks are performed with custom functions per node-type. And, since these children are arranged as arrays – we can also ensure order.
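
Concretely, the allowed-children check can be as small as this (the keyword subset here is illustrative, not the actual lyang tables; it walks the nested-table AST produced by the matcher above):

-- allowed children per keyword (illustrative subset)
local allowed = {
    container = { leaf = true, list = true, container = true, description = true },
    leaf      = { type = true, description = true },
}

-- node = { keyword, [value], [children] }
local function validate(node, parent_kw)
    local kw = node[1]
    if parent_kw and allowed[parent_kw] and not allowed[parent_kw][kw] then
        error(("'%s' is not allowed under '%s'"):format(kw, parent_kw))
    end
    local last = node[#node]
    if type(last) == "table" then        -- the statement had a { } block
        for _, child in ipairs(last) do
            validate(child, kw)
        end
    end
end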

That’s it! :-) - take a look at: https://github.com/aniruddha-a/lyang

Comments/feedback welcome.



Monday, December 14, 2015

Oh! But how would I know, without 'running' it, if it's OK?

Have you come across teammates checking in scripts without running them - scripts with syntax errors, malformed XML documents, broken Makefiles? ... Well, there is some hope: you could help (by sharing this link! :-) )

Mostly, if it's code in a compiled language (like C), developers at least compile the code (though they might not test it!) before checking it into version control. But in the case of XML documents, scripts etc., the code gets checked in, and it won't error out until some test case is actually run - or in the nightly regressions!

It's not really difficult to do some basic validation of scripts, Makefiles, or even XML documents before a check-in. Most languages/tools have an option to do basic checks - syntax checks, dry runs (without actually executing the commands), bytecode generation etc. Here are some which I use often:

Make

make -n

Does a dry run: prints the commands that would be executed on an actual build, without running them.

Perl

perl -c

Does a compile-only pass and says whether the syntax is OK (there are caveats with BEGIN blocks, but at least it's something!)

Python

pycompile

This is good - there is a separate utility to do the compilation! Just run pycompile on your scripts once before you proceed (plain python -m py_compile works too, if pycompile isn't around).

bash

bash -n

Again, syntax checks without actually executing anything.

XML

xmllint

The name says it all! Even though there are plenty of options, you can just run it once on your XML docs [with no options at all, or with --noout to suppress the echo of the document] to see if they're well-formed.

Lua

luac

Lua bytecode compiler, like pycompile. It generates luac.out, which you can delete :-) (luac -p parse-checks without writing any output).
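
And if you would rather not rely on memory, these checks wire neatly into a git pre-commit hook. A rough sketch (the file patterns and tool choices are just an example - adjust them to your repo):

#!/bin/bash
# .git/hooks/pre-commit - cheap syntax checks on staged files
fail=0
for f in $(git diff --cached --name-only --diff-filter=ACM); do
    case "$f" in
        *.sh)   bash -n "$f"                || fail=1 ;;
        *.pl)   perl -c "$f"                || fail=1 ;;
        *.py)   python -m py_compile "$f"   || fail=1 ;;
        *.xml)  xmllint --noout "$f"        || fail=1 ;;
        *.lua)  luac -p "$f"                || fail=1 ;;
    esac
done
exit $fail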

Monday, December 07, 2015

Recover a Locked Android without data loss

Since I had to recover a Samsung Note 3, all the steps mostly lean towards Samsung phones.

Scenario

  • Password forgotten! (you may think this cannot happen to you, but it surely can! :-) )
  • Android Device Manager password reset doesn't work (and it didn't, BTW! at least I could give this a try, as WiFi was on)
  • Phone not registered with Samsung's recovery service (good that they have an alternative to ADM! but unlike ADM, you need to explicitly register your device - the one I was handling wasn't registered)
  • Un-rooted phone, with stock Android recovery
  • USB debugging disabled!

For the impatient

TL;DR summary
  • Install Samsung CDC drivers
  • Get the right device code (this one is ha3g)
  • Get the right custom recovery image (TWRP or CWM)
  • Use a firmware flashing utility which works for you (Odin or Heimdall)
  • Flash the recovery image.
  • Boot into recovery mode, and delete the password files.
That's it!

Details

But,... the devil is in the details!

USB drivers

Get the Samsung CDC drivers and make sure your OS detects the phone. (If you are on Windows 10 and want to connect with ADB - Samsung doesn't have ADB drivers for Windows 10!)

Device Code Name

Now, this is one confusing thing! All the custom ROMs refer to and name their images by the phone's code name.
The Samsung Note 3, for example, has multiple versions - Sprint, Verizon and International. At first glance it might seem like "Ah! I bought it in India, and it's not tied to any carrier, so it must be International" ... sorry, not that simple - you need to know the correct CPU and model. (In this case the code name is ha3g and the CPU is called Exynos, though that's not mentioned anywhere on the box or in the manuals! The model is N900 - that one is easy, it shows up when the phone starts up.)

Firmware Flash utility

2 choices here:
  • Heimdall (FOSS, available on GitHub. Binary packages for both Windows and Linux are available)
  • Odin (Leaked [from Samsung] Windows application)

Tryst with Heimdall

Since I have a love for FOSS, I was hell-bent on getting Heimdall to work - and it has a cool command line! I downloaded and built the latest version from source (on Ubuntu 14.04). But it didn't work, and I could not figure out what the problem was! I tried the Windows binaries too, with both USB 2.0 and 3.0 ports (with the 3.0 port it wouldn't even detect the phone!)
(the good part of trying Heimdall: I got to know a little bit about partitions and PIT [Partition Information Table])
A note on USB versions: If you do not know how to recognize the ports: peep into the USB socket, BLUE means 3.0 and YELLOW is 2.0.

Odin, finally

Odin is supposedly very picky: about the port, the cable etc. From what I read, if the phone is not detected on one port, change to another port and try a different cable (in my case I had the original USB cable that came with the phone, on a USB 3.0 port - it got recognized instantly).
Odin's messages and the UI aren't too friendly either!

Custom recovery image

Here, again there are quite a few choices:
  • ClockWorkMod (CWM) - Development Ceased
  • Team Win Recovery Project (TWRP) New and cool
  • CyanogenMod Recovery (CR)
I could not find a CWM image for ha3g (I could find one for hltexx - but that's not Exynos!). CR is still too new! Initially I tried whichever version of the TWRP image I could get, but Odin wouldn't flash it! It would give this error message:
NAND Write Start!! 
FAIL!
At first I thought it had something to do with the NAND flash storage, but after a lot more research I found that it could be due to the type of image being written. I had to do 2 things to be able to flash successfully:
  1. Get the latest TWRP (2.8.6.0 recovery image, bundled as a .tar)
  2. Extract the .img from the .tar and convert it to a .tar.md5 (I found a script on XDA forums which did this)
And finally, Odin could flash the image to the phone!
You have to put the phone into Download Mode to write to flash; on Samsung phones that is done by pressing and holding the Volume-Down, Home and Power keys together.

Recovery mode

You need to know how to get into recovery mode first: press and hold the Volume-Up + Home + Power keys till the logo flashes (do not confuse this with Download Mode!)
The catch here is: though we flashed the TWRP recovery, the phone tries to be smart and replaces it with the stock recovery if you let it reboot by itself! The remedy is to boot into TWRP immediately after the flash - i.e., once flashed, do not let it restart normally (un-check Auto-Reboot in Odin).

Ah, TWRP!

Now, this is cool! If you have ever seen the default Android recovery and then compare it with TWRP, it's like comparing age-old feature phones to modern-day touch phones! TWRP has a touch interface and neat buttons; you can pretty much do away without reading any manuals - thanks to the neat, simplified UI.

Play safe

First thing I did after I could get to TWRP was to insert a microSD card and take a backup of the data, so that I could continue my RnD (it takes a NANDroid backup, which it can restore).

Recover/Remove?

I tried to pull out locksettings.db and run some SQLite queries to get the hashed password and salt. At this stage, I didn't want to go any further. So, I came back to TWRP and deleted the 2 key files /data/system/password.key and /data/system/gesture.key (though I knew it was password-locked and not gesture-locked). On reboot - no password! :) Nothing lost - all contacts and data intact.

Thanks

  • To this post for the motivation! (so many Samsung service centers told me that it's not possible, and that a factory reset [with data loss] is the only option!)
  • To this guy for the script
  • For many other detailed posts, step-wise procedures and YouTube videos which Android lovers have patiently put together.
  • And, of course to TWRP! I donated $15 towards the development (~1K/-)

Monday, November 30, 2015

Common config that can be utilized by multiple languages

Often, I want to separate out some config/initial data from a program, and keep it in a way that is accessible to different parts of the system. If all parts of the system are in the same language, then the config can be very simple - just function invocations with values:

For e.g. in shell:

   # Format:
   # Employee name role department
   Employee "Bheem"  "Fighter"  "Security"
   Employee "Chutki" "Hiring"   "HR"

Then, the invoker would just have to define the function Employee and source this file!
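
On the consumer side (in shell), that is literally all there is to it. A minimal sketch - employees.conf is a made-up name for the config file above, and the echo stands in for whatever real work you do with the record:

#!/bin/bash
# define the "vocabulary" of the config, then source it
Employee() {
    local name=$1 role=$2 department=$3
    echo "adding: $name / $role / $department"   # stand-in for real work
}

source ./employees.conf    # runs the Employee lines from the config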

So far, so good if all parts were in shell. But, what if I want to use this config in some C code, and in Lua scripts etc. ?

(Don't think file-read/parse/split/tokenize .. No no! yuck!)

Can we define (somehow), a common format which is usable by different environments (languages), with minimal or no effort!? Observe the calling conventions:

C function call looks like:

   Employee (arg1, arg2 /* ... */)

Shell (is the simplest):

     Employee "arg1" "arg2" # ...

Lua:

   Employee (arg1, arg2) -- ...

But, there is a special notation in Lua: when the sole argument is a literal string or a table constructor, we can get rid of the parentheses! Like:

   Employee "arg1"

OK, but what if we have more than 1 argument? Now we need to go a bit deeper, into function chaining:

    function Employee (name)
        return function (role)
            return function (department)
               add_to_db(name, role, department)
            end
        end
    end    

Without going into much detail about anonymous functions, lexical scoping, upvalues etc. - the function returns a function, which returns yet another function. That means, when you invoke

    Employee "Bheem"  "Fighter"  "Security"

The first function reads the first value and returns its inner function, which reads the second value and in turn returns another function which finally does the work (and can access all 3 values - this is what we want)! Cool, ain't it? :-)

Wait, we solved the problem for shell and Lua, what about C code ?

In case of C code, we can always use the well-documented Lua C APIs to read Lua config! :)
(after all, Lua evolved from a data-description language)
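
For completeness, here is a minimal sketch of that C side (file names and the add_to_db name are illustrative): the curried Employee wrapper from above stays in Lua, and C only supplies add_to_db.

/* read the shared config from C via the Lua API */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

static int l_add_to_db(lua_State *L)
{
    const char *name = luaL_checkstring(L, 1);
    const char *role = luaL_checkstring(L, 2);
    const char *dept = luaL_checkstring(L, 3);
    printf("adding: %s / %s / %s\n", name, role, dept);  /* stand-in for real work */
    return 0;
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "add_to_db", l_add_to_db);

    /* employee.lua defines the chained Employee(); employees.conf is the shared config */
    if (luaL_dofile(L, "employee.lua") || luaL_dofile(L, "employees.conf"))
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}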

If you think all this is too much work - think again; look up the DRY principle.

Tuesday, September 22, 2015

vim plugins that I can't live without!

Here is a list of vim plugins, which are absolute time-savers!

file_line

How many times have you copied and pasted a line from a gcc error/warning, in the form
/path/to/file.ext:line:col
and then had to remove the :line and :col before editing it in vim!
This nice little plugin opens the file at that line and places the cursor at the right column!

matchit

For C, of course, the built-in % key does the trick of taking you to the matching brace/paren, but what about languages where you have begin .. end, or if .. fi? That is where this is really useful - it overloads the % key for keyword pairs!

Align

Align anything! declarations, comments, function-headers ... saves a lot of time!

Fugitive

I use git, and no day goes by without looking into git blame of one source file or another! It's neat to see that from within vim, and I can navigate to the complete diff [if I have to] by just hitting enter!

DrawIt

I like to keep everything under version-control, so all my docs are in text (Markdown) and checked into git. I use DrawIt to create cool ASCII diagrams :)

Tuesday, September 08, 2015

Enforcing Coding-Style checks on diffs


Have you seen what most code review comments look like?

- This line exceeds 80 columns ...
- No space after ...
- No comma here ...
- Conditionals must have a block ...

Though these look insane, they ensure that the code looks saner when everyone follows the company's coding guidelines - unfortunately it's not easy to enforce! Everyone has their own favorite editor and their own settings; add to this, there will be third-party code which has its own, different style.

The best one can do is to warn, when new code is added, if the delta violates the coding guidelines. Wait - delta? diff? How do we run checks on a diff!? It's not easy, but not too complicated either - at least for the most common checks, which the author should have caught himself/herself before posting the code for review!

My approach: Simple Perl regex checks on diff hunks!

(If the mention of Perl + regex makes you feel nauseous - stop here. :-))

Note: I tried this for C code diffs only.

We're not dealing with a function in its entirety - it's just a diff - so how do we go about it?
Of course, line by line :-/ duh!

  • Use unified diff, to get the context (file name, line range) 
  • Do some basic line merging logic when we are sure it's not complete [1]
  • Check line by line, for basic style enforcement
    • length > 80?
    • trailing white-space ?
    • if/else not followed by a block ({) ?
    • etc...
  • Keep track of line numbers so that meaningful messages can be printed, pointing to the exact line where the issue is!
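
Here is roughly what those checks look like over a unified diff - a stripped-down sketch of the idea, not the actual hook (the line-merging logic from [1] is left out):

#!/usr/bin/perl
# sketch: flag common style violations on the added lines of a unified diff
use strict;
use warnings;

my ($file, $line);
while (<>) {
    chomp;
    if (m{^\+\+\+ [ab]/(.*)})         { $file = $1; next; }  # new-file header
    if (m{^@@ -\d+(?:,\d+)? \+(\d+)}) { $line = $1; next; }  # hunk header: new start line
    next unless defined $line;
    if (/^\+(.*)/) {                                         # an added line
        my $code = $1;
        warn "$file:$line: line exceeds 80 columns\n" if length($code) > 80;
        warn "$file:$line: trailing whitespace\n"     if $code =~ /[ \t]+$/;
        warn "$file:$line: if without a block\n"      if $code =~ /^\s*if\s*\(.*\)\s*[^\s{]/;
        $line++;
    } elsif (/^ /) {
        $line++;                                             # context line
    }                                                        # '-' lines don't advance
}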

And how do we run this script automatically ?
Since I use git, git hooks! (just make it part of pre-commit hook)

Does it work well?
surprisingly well! :-)

[1] I used parentheses balance as a check to know if I need to merge lines or not.


--EDIT--

Do we need to do any extra work, than what's written above?
We do have to take care of /* comments */ and "strings"! Regexes can easily be fooled if we don't take enough care!

Can we make this fool-proof?
No! this can only be best-effort


Saturday, November 01, 2014

Editing ad-hoc config files on Linux

I had a need to edit the /etc/network/interfaces file on Ubuntu (from my program). At first I found some awk script on the web which claimed to do the job, but when I tried it, it didn't address all the CRUD cases I was interested in (also, I didn't want system() calls in my C code).
So, I searched for a better utility and found Augeas - this is fantastic, you have to try it to believe it! It has a neat command-line utility (augtool) as well as an easy-to-use C API (plus bindings for many scripting languages, including my fav, Lua :-) ).

The following commands show how easy it is to add/remove an interface (the paths are XPath-like expressions):

$ sudo augtool # Add an interface at the end (last)
augtool> set /files/etc/network/interfaces/iface[last()+1] eth1
augtool> set /files/etc/network/interfaces/iface[last()]/family inet
augtool> set /files/etc/network/interfaces/iface[last()]/method static
augtool> set /files/etc/network/interfaces/iface[last()]/address 10.1.1.1
augtool> save
Saved 1 file(s)
augtool> 


$ sudo augtool  # Edit the added interface (by name, not position)
augtool> set /files/etc/network/interfaces/iface[. = 'eth1']/netmask 255.255.255.0
augtool> save
Saved 1 file(s)
augtool> set /files/etc/network/interfaces/iface[. = 'eth1']/mtu 1500
augtool> save
Saved 1 file(s)
augtool> 

$ sudo cat /etc/network/interfaces 
auto lo
iface lo inet loopback
iface eth1 inet dhcp
   address 10.1.1.1
   netmask 255.255.255.0
   mtu 1500

$ sudo augtool # Lets just delete eth1 now
augtool> rm /files/etc/network/interfaces/iface[. = 'eth1']
rm : /files/etc/network/interfaces/iface[. = 'eth1'] 6    <-- 6 fields removed
augtool> save
Saved 1 file(s)
augtool> 

Now, the same/similar exercise programmatically:
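
Here is a minimal sketch of the same add-an-interface exercise with the Augeas C API (error handling mostly omitted; link with -laugeas):

#include <stdio.h>
#include <augeas.h>

int main(void)
{
    augeas *aug = aug_init(NULL, NULL, AUG_NONE);
    if (!aug) { fprintf(stderr, "aug_init failed\n"); return 1; }

    aug_set(aug, "/files/etc/network/interfaces/iface[last()+1]", "eth1");
    aug_set(aug, "/files/etc/network/interfaces/iface[last()]/family", "inet");
    aug_set(aug, "/files/etc/network/interfaces/iface[last()]/method", "static");
    aug_set(aug, "/files/etc/network/interfaces/iface[last()]/address", "10.1.1.1");

    if (aug_save(aug) < 0)
        fprintf(stderr, "save failed\n");

    aug_close(aug);
    return 0;
}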

Monday, October 27, 2014

Elixir: Functional |> Concurrent |> Pragmatic |> Fun

Since I have been subscribed to Pragmatic Bookshelf, one book which I have been waiting to read (for months), is Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun by Dave Thomas (remember The Pragmatic Programmer?).
There seems to be a lot of excitement around the language, with books being written even before the official release of the language is out. Elixir is developed by Jose Valim - a core developer of Rails - so the cool features of Ruby are expected.

Erlang (concurrent, reliable, but hard!) is still popular, and has been used to write a lot of cool (and stable) software - some popular, like WhatsApp and OTP, and some well known only in the networking world, like ConfD. Here is a nice commentary on Elixir by Joe Armstrong, creator of Erlang:
http://joearms.github.io/2013/05/31/a-week-with-elixir.html

Amazon says the book arrives on the 30th - I have signed up to be notified! Can't wait to get the book (and to get a hold on concurrent programming on the Erlang VM) :)

Wednesday, August 27, 2014

Simple asynchronous task spawn from shell

Yesterday was a hard day at work! I was meddling with a shell (bash) script where I had to make a CLI command work - a command which would restart everything (the whole set of services, including the ones that spawned the CLI).

That was somehow not working! Before the script could restart the services, it would get terminated. I did a lot of reading, and learnt more bash tricks in a day than I had in a long time! :)

Now, for this issue, what I needed was a way to tell someone "please restart me", because "I'm not able to restart myself!" - ah, an asynchronous task/job.

I already had a daemon to execute tasks, but since it was also part of the processes that get killed on restart, I could not use it. I have a simple shell script and need no additional complexity - what could I do? ... A little more digging on Stack Exchange led me to a simple solution.

Schedule a job to be executed at a later time (or immediately) using at!

So, my solution boiled down to a very simple change - instead of calling restart directly, I had to say

echo restart | at now

(at hands the job to the atd daemon, so it runs detached from my soon-to-be-killed script)

wow!

Sunday, April 27, 2014

Merge Excel sheets ?

No problem!

A friend of mine had a problem: she had to merge two [huge] Excel workbooks by matching names in one with the names in the other. Something like:

WorkBook1
+---------+---------+----------+----------+
| Name1   |  field1 |  field2  |  ...     |
+---------+---------+----------+----------+
|  ...    |         |          |          |
+---------+---------+----------+----------+
|  ...    |         |          |          |
+---------+---------+----------+----------+

WorkBook2
+---------+---------+----------+----------+
| field2_1| field2_2|  Name2   |  ...     |
+---------+---------+----------+----------+
|  ...    |         |          |          |
+---------+---------+----------+----------+
|  ...    |         |          |          |
+---------+---------+----------+----------+


If they were Database tables, then we could have done something on the lines of:

SELECT * FROM WorkBook1,WorkBook2 WHERE Name1=Name2;

But, these are excel sheets, and worse yet, the names in WorkBook1 are in format:
"FirstName LastName"
and names in WorkBook2 are in format:
"LASTNAME, FIRSTNAME"

(i.e., uppercase, and with a comma). Duh! And there will be many names with 3-4 words - imagine the permutations.

Excel experts would say this can be solved with some cool macros, or maybe VB scripts - but I am an old-school Unix text guy! I think in terms of text filters only!

To solve the problem of name matching, take this [hypothetical] name:

Shankaran Ganesh Krishnan

the permutations will be:

Shankaran Krishnan Ganesh
Krishnan Shankaran Ganesh
Krishnan Ganesh Shankaran
Shankaran Ganesh Krishnan
Ganesh Shankaran Krishnan
Ganesh Krishnan Shankaran

Some names can also contain the initials [with a period], like:

Ganesh K. Shankaran

So, how can we do the name matching? ... For a moment I thought of using a permuter and then saving all the permutations (stupid!), but that's not required!

Let's say we do the following:
 - Remove dots and commas
 - Change to lowercase (and trim spaces)
 - Sort each name's words

If you had "Shankaran, Ganesh Krishnan" in WorkBook1, and "GANESH, SHANKARAN KRISHNAN" in WorkBook2, then both will become: "ganesh krishnan shankaran"
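
In Perl, that normalization is just a few lines (a sketch of the idea, not the full merge script - name_key is a made-up helper name):

# same key for "Shankaran, Ganesh Krishnan" and "GANESH, SHANKARAN KRISHNAN"
sub name_key {
    my ($name) = @_;
    $name =~ s/[.,]//g;                              # drop dots and commas
    $name = lc $name;                                # lowercase
    my @words = grep { length } split /\s+/, $name;  # split and trim
    return join ' ', sort @words;                    # word-order independent
}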

Now, the only problem that remains is to save the .xls as .csv, so that I can load it in Perl (Parse::CSV). Unfortunately, Excel doesn't have an option to save all the sheets in a workbook to CSVs at once; I had to do that manually for each sheet and then merge. Other than that, it's pretty straightforward.

If you are about to say: Show me teh codez!
Here you go ...
What good are coding skills if you cannot put them to use, at the right time, to help friends!? :-)


Wednesday, April 16, 2014

Recognize, and transform text


Many a time, I see some text which is not in any known format. But it looks vaguely familiar, or simple enough to transform. The reason I'd want to transform it is, of course, to work with it: to load it into my scripting environment, to analyze or consume it, or to apply some complex programmatic logic to it.

Here, I give some examples and show the conversions. This could help in recognizing raw text and transforming it to its closest [known] cousins.

Case 1

Let's start with something simple. If you see a file something like this:

Alice:
sal=20000
age=23 
role=engineer

Bob:
sal=21000
age=28           
role=engineer

and you want to load this into your preferred programming environment (like a Python dict, Lua table, or a Perl hash) to work with it. As it stands, it is not in a format that is directly usable! But if we make a small change to the data - say, change each "Name:" line into "[Name]":

[Alice]
sal=20000
age=23 
role=engineer
  
[Bob]   
sal=21000
age=28           
role=engineer

Now, this is a valid .ini file (a format popular in the Windows world). And there are libraries for most languages to load and work with INI files!

What you need is a little Perl or sed regex to convert from the former to the latter! And don't think about Jamie's popular quote and be afraid (for such simple cases, a regex is a good fit - but make sure you really understand regexes before wielding one).
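
For instance, a one-liner like this does the Case 1 conversion (a sketch; the file names are made up, and it assumes the group names themselves never contain a colon):

sed 's/^\(.*\):[[:space:]]*$/[\1]/' employees.txt > employees.ini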

Case 2

If you have seen router configs (like JUNOS config), or some BibTeX entries, then the following will look familiar:

interface {
    eth0 {
      ip4  10.1.1.2;
      bia  aa:11:22:11:00:11;
    }
}

Again, this may not be directly loadable into your environment, but look at it again - doesn't it look close to JSON? All you need to do is ensure that the keys and values are quoted correctly.

As JSON:

{
 "interface" : {
    "eth0" : {
      "ip4" : "10.1.1.2",
      "bia" : "aa:11:22:11:00:11"
    }
 }
}

Or, Lua table:

interface = {
    eth0 = {
      ip4 = '10.1.1.2',
      bia = 'aa:11:22:11:00:11'
    }
}

again, both of these can be achieved with minimal changes.

Case 3

This might look very similar to Case 1, but observe the nesting and the richer data set!

[Alice]
sal=20000
age=23
role=[current=engineer;previous=DevOps,TAC]

[Bob]  
sal=21000
age=28          
role=[current=engineer;previous=]

Now, converting this to .ini doesn't seem to fit! Can we convert it to something else? Say I do this:

Alice:
  sal: 20000
  age: 23
  role:
        current: engineer
        previous:
                 - DevOps
                 - TAC

Bob:
  sal: 21000
  age: 28
  role:
        current: engineer
        previous:

Aha, now this is valid YAML! YAML, like JSON, is also fat-free XML - and you have libraries in all languages to load and work with YAML.

Case 4

We all know CSV: if you have seen simple spreadsheet data (think MS Excel), that's valid CSV. Also, spreadsheet editors give you an option to save it as plain CSV.

But what if the data were like this:

 a:b:c:"d"
or
 a|b|"c"|d

isn't it simple to change the delimiter to a comma (',') so that you can work with CSV libraries?
Bonus: if you have to send the data to a suit, just attach it and they can open it in a spreadsheet editor! You know suits frown on plain-text attachments! :-/

Note: the regex should be careful enough to handle quoting! (that applies to all cases listed above)

To summarize: you don't need a complicated parser to load text into your favorite language, to analyze it, or to apply programmatic transformations to it. All you need is to recognize the format, and figure out the closest known format you can convert it to, so that you can conveniently work with it. The following table might make it easier to remember:

       
Text                                 Easily converted to
----                                 -------------------
Delimited (line oriented)            CSV
Grouped, and simple key-value        INI
Indented, multi-level, with lists    YAML
Brace-nested, and key-value          JSON / Py-dict / Lua-table

Tuesday, April 08, 2014

FIGlets ?

I had used the Unix banner command many times, but I had never bothered to check how other cool-looking typefaces were generated. Most often, on starting up some open-source server/daemon, you'd come across a banner like:

                 ____                                   
 _ __ ___  _   _|  _ \  __ _  ___ _ __ ___   ___  _ __  
| '_ ` _ \| | | | | | |/ _` |/ _ \ '_ ` _ \ / _ \| '_ \ 
| | | | | | |_| | |_| | (_| |  __/ | | | | | (_) | | | |
|_| |_| |_|\__, |____/ \__,_|\___|_| |_| |_|\___/|_| |_|
           |___/                                        

Though I was sure these were not typed manually in an editor, I never probed much. For some reason, today I wanted to put one such banner at my daemon's start-up. After some Google digging, I found the source - FIGlet fonts.

But there's no need to install the figlet utility; instead, try this web app - TAAG (Text to ASCII Art Generator). And if you are working on an application - add a FIGlet banner ;-)

--EDIT-- (30-apr)

After playing around and having fun with FIGlets, I learnt about TOIlet :) (now wait, hold your imagination) - it's FIGlet + filters, and how colorful!

Look at the project page; it's much more than just colorful banners!