Saturday, November 01, 2014

Editing ad-hoc config files on Linux

I needed to edit the /etc/network/interfaces file on Ubuntu (from my program). At first I found an awk script on the web which claimed to do the job, but when I tried it, it didn't address all the CRUD cases I was interested in (also, I didn't want to have system() calls in my C code).
So, I searched for a better utility, and found Augeas. This is fantastic; you have to try it to believe it! It has a neat command-line utility (augtool) as well as an easy-to-use C API (plus bindings for many scripting languages, including my fav Lua too :-) ).

The following commands show how easy it is to add/remove an interface (the paths are XPath expressions):

$ sudo augtool # Add an interface at the end (last)
augtool> set /files/etc/network/interfaces/iface[last()+1] eth1
augtool> set /files/etc/network/interfaces/iface[last()]/family inet
augtool> set /files/etc/network/interfaces/iface[last()]/method static
augtool> set /files/etc/network/interfaces/iface[last()]/address
augtool> save
Saved 1 file(s)

$ sudo augtool  # Edit the added interface (by name, not position)
augtool> set /files/etc/network/interfaces/iface[. = 'eth1']/netmask
augtool> save
Saved 1 file(s)
augtool> set /files/etc/network/interfaces/iface[. = 'eth1']/mtu 1500
augtool> save
Saved 1 file(s)

$ sudo cat /etc/network/interfaces 
auto lo
iface lo inet loopback
iface eth1 inet dhcp
   mtu 1500

$ sudo augtool # Let's just delete eth1 now
augtool> rm /files/etc/network/interfaces/iface[. = 'eth1']
rm : /files/etc/network/interfaces/iface[. = 'eth1'] 6   <-- 6 fields removed
augtool> save
Saved 1 file(s)

Now, the same/similar exercise programmatically:
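One way to script the same thing is to feed augtool a batch of commands on its standard input; here is a minimal sketch (eth1 and dhcp are example values, and --autosave writes the changes on exit; the post's actual program used the Augeas C API):

```shell
#!/bin/sh
# Augeas commands kept as data, then piped to augtool in batch mode.
cmds='set /files/etc/network/interfaces/iface[last()+1] eth1
set /files/etc/network/interfaces/iface[last()]/family inet
set /files/etc/network/interfaces/iface[last()]/method dhcp'

if command -v augtool >/dev/null 2>&1; then
    printf '%s\n' "$cmds" | sudo augtool --autosave
else
    printf 'augtool not installed; would have run:\n%s\n' "$cmds"
fi
```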

Monday, October 27, 2014

Elixir: Functional |> Concurrent |> Pragmatic |> Fun

Since I have been subscribed to the Pragmatic Bookshelf, one book I have been waiting (for months) to read is Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun by Dave Thomas (remember The Pragmatic Programmer?).
There is a lot of excitement around the language, with books being written even before its official release is out. Elixir is developed by José Valim, a core developer of Rails, so the cool features of Ruby are to be expected.

Erlang (concurrent and reliable, but hard!) is still popular, and has been used to write a lot of cool (and stable) software: some of it popular, like WhatsApp and OTP, and some well known only in the networking world, like ConfD. Here is a nice commentary on Elixir by Joe Armstrong, the creator of Erlang.

Amazon says the book arrives on the 30th, and I have signed up to be notified. Can't wait to get the book (and to get a hold on concurrent programming on the Erlang VM)! :)

Wednesday, August 27, 2014

Simple asynchronous task spawn from shell

Yesterday was a hard day at work! I was meddling with a shell (bash) script behind a CLI command; the command had to restart everything, the whole set of services, including the ones that spawned the CLI itself.

That somehow wasn't working: before the script could restart the services, it would itself get terminated! I did a lot of reading, and learned more bash tricks in a day than I had in a long time! :)

Now, for this issue, what I needed was a way to tell someone "Please restart me", because "I'm not able to restart myself!" Ah: an asynchronous task/job.

I already had a daemon to execute tasks, but since it was also among the processes that get killed on restart, I could not use it. I had a simple shell script and needed no additional complexity; what could I do? ... A little more digging on Stack Exchange led me to a simple solution.

Schedule a job to be executed at a later time (or immediately) using at!

So, my solution boiled down to a very simple change; instead of calling restart directly, I had to say:

echo "restart" | at now

(at reads the commands to run from its standard input, and atd runs them outside my process tree, so they survive the caller's death.)
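A tiny self-contained illustration of the trick, with a harmless placeholder job standing in for the real restart command (atd, not the calling script, becomes the job's parent):

```shell
#!/bin/sh
# Queue a job with at(1) so it outlives this script.
# "$job" is a made-up placeholder; substitute your real restart command.
job='echo services restarted >> /tmp/restart.log'

if command -v at >/dev/null 2>&1; then
    echo "$job" | at now
else
    printf 'at not installed; would have queued: %s\n' "$job"
fi
```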


Sunday, April 27, 2014

Merge Excel sheets ?

No problem!

A friend of mine had a problem: she had to merge two [huge] Excel workbooks, by matching the names in one with the names in the other. Something like:

| Name1   |  field1 |  field2  |  ...     |
|  ...    |         |          |          |
|  ...    |         |          |          |

| field2_1| field2_2|  Name2   |  ...     |
|  ...    |         |          |          |
|  ...    |         |          |          |

If they were database tables, then we could have done something along the lines of:

SELECT * FROM WorkBook1,WorkBook2 WHERE Name1=Name2;

But, these are Excel sheets, and worse yet, the names in WorkBook1 are in the format:
"FirstName LastName"
and the names in WorkBook2 are in the format:
"LASTNAME, OTHER NAMES"
(i.e., uppercase, and with a comma). Duh! And there will be many names with 3-4 words; imagine the permutations.

Excel experts might say this can be solved with some cool macros, or maybe VB scripts, but I am an old-school Unix text guy! I think in terms of text filters only!

To solve the name-matching problem, take this [hypothetical] name:

Shankaran Ganesh Krishnan

the permutations will be:

Shankaran Krishnan Ganesh
Krishnan Shankaran Ganesh
Krishnan Ganesh Shankaran
Shankaran Ganesh Krishnan
Ganesh Shankaran Krishnan
Ganesh Krishnan Shankaran

Some names can also contain initials [with a period], like:

Ganesh K. Shankaran

So, how can we do the name matching? For a moment I thought of using a permuter and saving all the permutations (stupid!), but that's not required!

Let's say we do the following:
 - Remove dots and commas
 - Change to lowercase (trim spaces too)
 - Sort the names by words

If you had "Shankaran, Ganesh Krishnan" in WorkBook1, and "GANESH, SHANKARAN KRISHNAN" in WorkBook2, then both will become: "ganesh krishnan shankaran"
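The three steps above can be sketched as a small shell pipeline (the actual solution in the post was Perl; this is just an illustration):

```shell
#!/bin/sh
# Normalize a name: strip dots/commas, lowercase, then sort the words.
normalize() {
    printf '%s\n' "$1" | tr -d '.,' | tr '[:upper:]' '[:lower:]' \
        | xargs -n1 | sort | xargs
}

a=$(normalize "Shankaran, Ganesh Krishnan")
b=$(normalize "GANESH, SHANKARAN KRISHNAN")
printf '%s\n%s\n' "$a" "$b"   # both come out as "ganesh krishnan shankaran"
```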

Now, the only problem that remains is to save the .xls as .csv, so that I can load it into Perl (Parse::CSV). Unfortunately, Excel doesn't have an option to save all the sheets in a workbook to CSVs at once, so I had to do that manually for each sheet and then merge. Other than that, it's pretty straightforward.
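Once both workbooks are CSVs keyed on the normalized name, the SQL-style match can be sketched with join(1); the sample rows here are made up, and real input must be sorted on the key field:

```shell
#!/bin/sh
# Merge two CSVs on field 1 (the normalized name), like the SQL
# "WHERE Name1=Name2" above. Sample rows are hypothetical.
wb1=$(mktemp); wb2=$(mktemp)
printf '%s\n' 'ganesh krishnan shankaran,engineer' > "$wb1"
printf '%s\n' 'ganesh krishnan shankaran,chennai'  > "$wb2"

merged=$(join -t, "$wb1" "$wb2")
printf '%s\n' "$merged"
rm -f "$wb1" "$wb2"
```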

If you are about to say: Show me teh codez!
here you go ...
What good are coding skills, if you cannot put them to use, at the right time, to help friends!? :-)

Wednesday, April 16, 2014

Recognize, and transform text

Many a time, I see some text which is not in any known format, but which looks vaguely familiar, or simple enough to transform. The reason I would want to transform it is, of course, to work with it: to load it into my scripting environment to analyze it, consume it, or apply some complex programmatic logic to it.

Here, I give some examples and show the conversions. This could help in recognizing raw text, and transforming it to its closest [known] cousin.

Case 1

Let's start with something simple. Suppose you see a file like this:



and you want to load it into your preferred programming environment (like a Python dict, Lua table, or Perl hash) to work with it. As it stands, it is not in a directly usable format! But we can make a small change to the data: turn each line ending with a colon (a group header, line:) into [line].


Now, this is a valid .ini file format (popular in the Windows world). And there are libraries for most languages to load and work with INI files!

What you need is a little Perl or sed regex to convert the former to the latter. And don't think of Jamie Zawinski's popular quote and be afraid; for such simple cases, a regex is a good fit (but make sure you really understand regexes, to wield one when needed).

Case 2

If you have seen some router configs (like a JUNOS config), or some BibTeX entries, then the following will be familiar:

interface {
    eth0 {
      bia  aa:11:22:11:00:11;
    }
}

Again, this may not be directly loadable into your environment. But look at it again: doesn't it seem close to JSON? All you need to do is ensure that the keys and values are quoted correctly:


 "interface" : {
    "eth0" : {
      "ip4" : "",
      "bia" : "aa:11:22:11:00:11"
    }
 }

Or, a Lua table:

interface = {
    eth0 = {
      ip4 = '',
      bia = 'aa:11:22:11:00:11'
    }
}

Again, both of these can be achieved with minimal changes.
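A rough sed sketch of the quoting step; it handles only this simple shape, and the trailing commas would still need a touch-up for strict JSON:

```shell
#!/bin/sh
# Quote keys/values: 'word {' becomes '"word" : {' and 'key value;'
# becomes '"key" : "value",'. Input is a made-up fragment.
input='interface {
    eth0 {
      bia  aa:11:22:11:00:11;
    }
}'

out=$(printf '%s\n' "$input" | sed -E \
    -e 's/^( *)([A-Za-z0-9_]+) +[{]/\1"\2" : {/' \
    -e 's/^( *)([A-Za-z0-9_]+) +([^;]+);/\1"\2" : "\3",/')
printf '%s\n' "$out"
```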

Case 3

This might look very similar to Case 1, but observe the nesting and the richer data set!



Now, converting this to .ini doesn't seem to fit. Can we convert it to something else? Say I do this:

  sal: 20000
  age: 23
        current: engineer
                 - DevOps
                 - TAC

  sal: 21000
  age: 28
        current: engineer

Aha, now this is valid YAML! YAML, like JSON, is also fat-free XML! And you have libraries in all languages to load and work with YAML.

Case 4

We all know CSV: if you have seen simple spreadsheet data (think MS Excel), that's valid CSV. Also, spreadsheet editors give you an option to save it as plain CSV.

But what if the data were like this:


Isn't it simple to change the delimiter to a comma (',') so that you can work with CSV libraries?
Bonus: if you have to send the data to a suit, just attach it and they can open it in a spreadsheet editor. You know, suits frown on plain-text attachments! :-/
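For a pipe-delimited example (made-up rows), tr is enough; this naive version assumes the fields themselves contain no commas or quoting:

```shell
#!/bin/sh
# Swap a '|' delimiter for commas to get plain CSV.
input='alice|30|engineer
bob|28|analyst'

out=$(printf '%s\n' "$input" | tr '|' ',')
printf '%s\n' "$out"
```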

Note: the regex should be careful enough to handle quoting! (That applies to all the cases listed above.)

To summarize: you don't need a complicated parser to load text into your favorite language, to analyze it, or to apply programmatic transformations to it. All you need is to recognize the format, and check which known format is closest, so that you can convert it and conveniently work with it. The following table might make it easier to remember:

Text                                 Easily converted to
----------------------------------   --------------------------
Delimited (line-oriented)            CSV
Grouped, simple key-value            INI
Indented, multi-level, with lists    YAML
Brace-nested, key-value              JSON / Py-dict / Lua-table

Tuesday, April 08, 2014

FIGlets ?

I had used the Unix banner command many times, but I had never bothered to check how other cool-looking typefaces were generated. Most often, on starting up some open-source server/daemon, you'd come across a banner like:

 _ __ ___  _   _|  _ \  __ _  ___ _ __ ___   ___  _ __  
| '_ ` _ \| | | | | | |/ _` |/ _ \ '_ ` _ \ / _ \| '_ \ 
| | | | | | |_| | |_| | (_| |  __/ | | | | | (_) | | | |
|_| |_| |_|\__, |____/ \__,_|\___|_| |_| |_|\___/|_| |_|

Though I was sure these were not manually typed in an editor, I never probed much. For some reason, today I wanted to put one such banner in my daemon's start-up. So, after some Google digging, I found the source: FIGlet fonts.

But there's no need to install the figlet utility; instead, try this web app: TAAG (Text to ASCII Art Generator). And if you are working on an application, add a FIGlet banner ;-)

--EDIT-- (30-apr)

After playing around and having fun with FIGlet, I learned about TOIlet :) (now, wait, hold your imagination). It's FIGlet + filters, and so colorful!

Look at the project page; it's much more than just colorful banners!

Monday, February 17, 2014

Colorizing GDB backtrace

Being in a startup, we get to see a lot of core dumps, every day ;-). With too many stack frames and long file and function names, I hate the output! In fact, the visual clutter is so bad that it takes me a long time to compare two backtraces.

I was thinking about what could be done, and hit upon gdb hooks in a Stack Overflow post. Put together with my all-time favorite, Perl, to color the output, this gives some nice colorful backtraces ;-). The following is what I put in my ~/.gdbinit:
shell mkfifo /tmp/colorPipe.gdb
# no pause/automore
set height 0
# don't break lines
set width 0
define hook-backtrace
        shell cat /tmp/colorPipe.gdb | ~/ &
        set logging file /tmp/colorPipe.gdb
        set logging redirect on
        set logging on
end
define hookpost-backtrace
        set logging off
        set logging redirect off
        shell sleep 0.1
end
define hook-quit
        shell rm /tmp/colorPipe.gdb
end
define re
   echo \033[0m
end

And the Perl script ( to do the highlight:
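The colorizer can be sketched with a sed-based stand-in for that Perl script; it colors frame numbers and source locations with ANSI escapes (the sample frame line is made up):

```shell
#!/bin/sh
# Stand-in colorizer: frame numbers in red, "at file:line" in cyan.
red=$(printf '\033[31m'); cyan=$(printf '\033[36m'); off=$(printf '\033[0m')

colorize() {
    sed -e "s/^#[0-9]*/$red&$off/" -e "s/ at [^ ]*/$cyan&$off/"
}

line='#3  0x0000555555555189 in my_func (x=1) at main.c:42'
printf '%s\n' "$line" | colorize
```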

Try it! At least identifying the culprit will be much quicker, if not fixing the issue! ;-)

Tuesday, February 04, 2014

Using API docs as a test aid

I wrote earlier about my custom documentation using Lua embedded in my C source. With the complete description of each function available (its arguments [and their types], return type, etc.), can we do more with it?
I wanted some test stubs for the Lua APIs (that I am defining from my C back-end), and all I had to do was re-define the mydoc function to consume the same data but, instead of generating documentation, generate some Lua code to test the APIs.
That is, if I have a function that takes a number and returns a string on success, and nil on failure, I generate:
function Stub.test (n)
  assert(type(n) == 'number', 'n should be a number')
  if g_nil_path then return nil end
  return 'rand-str'
end
With this, I can see and get a feel for the Lua APIs that my C functions are exposing. I can also invoke the user scripts just to check the argument types for correctness, and I can exercise the success path as well as the failure path by setting the global g_nil_path. And of course, for more randomness, I use

when returning a number, and

string.rep(string.char(math.random(65,90)), math.random(1,10))

when returning strings.

Saturday, January 18, 2014

Using bash for a simple 'Domain Specific Micro Language'

I had bash scripts to manage some rules (like ACLs), all in a single script which kept growing with the addition of new rules. At one point a senior colleague of mine suggested splitting them out into separate, manageable files. I thought it was time to move the script to Perl, but then I thought: let me give bash one more shot! And I had some fun [after a long time] with bash.

I came up with this simple approach to model the rules: a file with rule and end keywords, and all the required data as key-value pairs within, like:
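A hypothetical rule file in this scheme might look like this (the keys are made up for illustration):

```
rule
  name=allow-ssh
  proto=tcp
  dport=22
  action=accept
end
```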


And then, a processor script to convert files like these into actual rules (suitable for consumption in our system). The skeleton of the processor script:
declare -A vartbl
rule_id=0
split_kp_re='^[[:space:]]*([^=]+)=(.*)$'

do_process() {
    for k in "${!vartbl[@]}"; do
        : # use $k and ${vartbl[$k]} to emit the rule in the required format
    done
}

while read -r i; do
    if [[ $i == \#* ]]; then continue; fi               # skip comments
    case $i in
        rule) rule_id=$((rule_id + 1)); vartbl=() ;;    # new rule begins
        end)  do_process ;;                             # rule complete
        *)    [[ $i =~ $split_kp_re ]] && vartbl[${BASH_REMATCH[1]}]=${BASH_REMATCH[2]} ;;
    esac
done < "$infile"

where $infile is the input file (it can be taken from the command line).
As can be seen, the function do_process, which processes one rule at a time, can use the values from the associative array vartbl and write them out in any required format. And our files can have comments too (lines beginning with a #) :-)
Note: bash 4.0 or later is required for associative-array support.