pkhamre.blog

thoughts, devops, tools and stuff.

The Anti-todo List

Image by MrPessimist

A couple of years ago I tried out the Pomodoro Technique and was quite happy with the workflow. Even though it gave good results during my work hours, I found it hard to use the technique consistently over time and eventually stopped using it.

About a year ago I started using Octopress to write a simple log of the work I had done each day. This was really simple to accomplish, and I quickly created a morning routine of initializing a new post which I would fill with all the work I finished throughout the day. The last thing I do before I leave the office is to publish the work log for the current day. Walking out of the office knowing I have finished several tasks simply makes me feel more productive and a lot better.

Instead of pre-planning what I am going to work on during the day, I just do it!

This is what I think of as the concept of the anti-todo list.

Just do it, and write down what you did.

Maintenance Windows in Pingdom Checks

Pingdom does not offer maintenance windows in its web interface. Right now, we run a blocking backup of our MySQL database every night. Getting alerts from Pingdom because of this is annoying, so I created a simple solution using the Pingdom API. The following Ruby script sends a simple HTTP request to the Pingdom API and pauses (or unpauses) all available checks. I have configured it to run through cron every night, and it solves my problem with receiving alerts during backups. In the long run, we will of course eliminate the downtime by setting up non-blocking backups with e.g. Percona XtraBackup.

pingdom-maintenance-window.rb
#!/usr/bin/env ruby

require 'em-http'
require 'em-http/middleware/json_response'
require 'optparse'
require 'yaml'

config = YAML.load_file 'pingdom.yml'

pingdom_user   = config['username']
pingdom_pass   = config['password']
pingdom_appkey = config['appkey']

host = 'https://api.pingdom.com'
request_options = {
  :path => '/api/2.0/checks',
  :head => {
    'accept-encoding' => 'gzip, compressed',
    'app-key'         => pingdom_appkey,
    'authorization'   => [pingdom_user, pingdom_pass]
  }
}

optparse = OptionParser.new do |opts|
  opts.banner = "usage: #{$0} [options]"

  opts.on('-h', '--help', 'display this message') do
    puts opts
    exit
  end

  opts.on('-p', '--pause', 'pause all checks') do
    request_options[:body] = 'paused=true'
  end

  opts.on('-u', '--unpause', 'unpause all checks') do
    request_options[:body] = 'paused=false'
  end
end

optparse.parse!

# Require either --pause or --unpause, otherwise there is nothing to do.
unless request_options.has_key? :body
  puts optparse
  exit 1
end

EventMachine.run do
  EventMachine::HttpRequest.use EventMachine::Middleware::JSONResponse

  http = EventMachine::HttpRequest.new host

  # PUT /api/2.0/checks with paused=true|false modifies all checks at once.
  request = http.put request_options

  request.callback do
    if request.response.has_key? 'error'
      puts request.response
      exit 1
    else
      EventMachine.stop_event_loop
    end
  end

  # Stop instead of hanging forever if the connection or request fails.
  request.errback do
    puts request.error
    exit 1
  end
end
pingdom.yml
username: '<pingdom username>'
password: '<pingdom password>'
appkey: '<pingdom application key>'
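
Since the script runs from cron, the crontab entries might look something like the sketch below. The times and the install path are assumptions on my part, so adjust them to match when your backup actually runs.

# /etc/cron.d/pingdom-maintenance -- hypothetical path and schedule
# Pause all Pingdom checks just before the nightly backup starts.
55 2 * * * root cd /opt/pingdom-maintenance && ./pingdom-maintenance-window.rb --pause
# Unpause them again once the backup window is over.
30 3 * * * root cd /opt/pingdom-maintenance && ./pingdom-maintenance-window.rb --unpause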

Logging to Logstash JSON Format in Nginx

Inspired by the logstash cookbook recipe on logging to JSON format in Apache, I made a similar nginx log_format so that nginx logs in Logstash JSON format as well. The configuration is quite similar to the Apache one, but nginx has more sensible variable names. The configuration below needs to be included within the http context in nginx.

log_format logstash_json '{ "@timestamp": "$time_iso8601", '
                         '"@fields": { '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"request_time": "$request_time", '
                         '"status": "$status", '
                         '"request": "$request", '
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"http_user_agent": "$http_user_agent" } }';

Then use something like this in your server configuration.

access_log /var/log/nginx/www.example.org-access.json logstash_json;

A simple logstash.conf for demo-purposes.

input {
  file {
    path => "/var/log/nginx/www.example.org-access.json"
    type => nginx

    # This format tells logstash to expect 'logstash' json events from the file.
    format => json_event
  }
}

output {
  stdout { debug => true }
}

Running the logstash agent gives the following output.

Note: The output is “beautified” with JSONLint.

$ java -jar logstash-1.1.1-monolithic.jar agent -f logstash.conf
{
    "@source"=>"unknown",
    "@type"=>"nginx",
    "@tags"=>[],
    "@fields"=>{
        "remote_addr"=>"192.168.0.1",
        "remote_user"=>"-",
        "body_bytes_sent"=>"13988",
        "request_time"=>"0.122",
        "status"=>"200",
        "request"=>"GET /some/url HTTP/1.1",
        "request_method"=>"GET",
        "http_referrer"=>"http://www.example.org/some/url",
        "http_user_agent"=>"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.79 Safari/537.1"
    },
    "@timestamp"=>"2012-08-23T10:49:14+02:00"
}

Thanks to @jordansissel and @ripienaar for their awesome work on the apache cookbook.

Understanding StatsD and Graphite

After a short conversation with BryanWB_ in the #logstash channel on Freenode, I realized that I did not know how my data was sent and how it was stored in Graphite. I knew that StatsD collects and aggregates my metrics and ships them off to Graphite, which stores the time-series data and lets us render graphs based on it.

What I did not know was whether my http-access graphs displayed requests per second, average requests per retention interval, or something else entirely.

It was time to research how these things worked in order to get a complete understanding.

StatsD

To get a full understanding of how StatsD works, I started to read the source code. I knew StatsD was a simple application, but I did not know it was this simple. Just over 300 lines of code in the main script and around 150 lines in the graphite backend code.

Concepts in StatsD

StatsD has a few concepts listed in the documentation that should be understood.

Buckets

Each stat is in its own “bucket”. They are not predefined anywhere. Buckets can be named anything that will translate to Graphite (periods make folders, etc)

Values

Each stat will have a value. How it is interpreted depends on the modifiers. In general, values should be integers.

Flush interval

After the flush interval timeout (default 10 seconds), stats are aggregated and sent to an upstream backend service.

Metric types

Counters

Counters are simple. A counter adds a value to a bucket, and the value stays in memory until the flush interval.

Let's take a look at the source code that generates the counter stats that get flushed to the backend.

for (key in counters) {
  var value = counters[key];
  var valuePerSecond = value / (flushInterval / 1000); // calculate "per second" rate

  statString += 'stats.'        + key + ' ' + valuePerSecond + ' ' + ts + "\n";
  statString += 'stats_counts.' + key + ' ' + value          + ' ' + ts + "\n";

  numStats += 1;
}

StatsD iterates over the counters it has received and assigns two variables: one holds the counter value and the other holds the per-second rate. It then appends both values to statString and increments the numStats variable.

If you use the default flush interval of 10 seconds and send StatsD 7 increments on a counter within that interval, the counter value will be 7 and the per-second value will be 0.7. No magic.
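
To make this concrete, here is a minimal Ruby sketch of my own (not part of StatsD) that sends those 7 increments as plain UDP datagrams in the StatsD counter format, '<bucket>:<value>|c'. The bucket name is made up, and 8125 is the default StatsD port.

#!/usr/bin/env ruby

require 'socket'

socket = UDPSocket.new

# Send 7 counter increments for the hypothetical bucket 'site.hits'.
7.times do
  socket.send 'site.hits:1|c', 0, 'localhost', 8125
end

# After the next flush, StatsD sends stats.site.hits (0.7, the per-second
# rate) and stats_counts.site.hits (7, the raw count) to Graphite.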

Timers

Timers collect numbers. They do not necessarily need to contain a value of time; you can collect bytes read, the number of objects in some storage, or anything else that is a number. A good thing about timers is that you get the mean, the sum, the count, the upper and the lower values for free. Feed StatsD a timer and these get calculated automatically before they are flushed to Graphite. Oh, I almost forgot to mention that you also get the 90th percentile calculated for the mean, sum and upper values as well. You can also configure StatsD with an array of percentiles, which means you can have, for example, the 50th, 90th and 95th percentiles calculated for you.

The source code for timer stats is a bit more advanced than the code for the counters.

for (key in timers) {
  if (timers[key].length > 0) {
    var values = timers[key].sort(function (a,b) { return a-b; });
    var count = values.length;
    var min = values[0];
    var max = values[count - 1];

    var cumulativeValues = [min];
    for (var i = 1; i < count; i++) {
        cumulativeValues.push(values[i] + cumulativeValues[i-1]);
    }

    var sum = min;
    var mean = min;
    var maxAtThreshold = max;

    var message = "";

    var key2;

    for (key2 in pctThreshold) {
      var pct = pctThreshold[key2];
      if (count > 1) {
        var thresholdIndex = Math.round(((100 - pct) / 100) * count);
        var numInThreshold = count - thresholdIndex;

        maxAtThreshold = values[numInThreshold - 1];
        sum = cumulativeValues[numInThreshold - 1];
        mean = sum / numInThreshold;
      }

      var clean_pct = '' + pct;
      clean_pct.replace('.', '_');
      message += 'stats.timers.' + key + '.mean_'  + clean_pct + ' ' + mean           + ' ' + ts + "\n";
      message += 'stats.timers.' + key + '.upper_' + clean_pct + ' ' + maxAtThreshold + ' ' + ts + "\n";
      message += 'stats.timers.' + key + '.sum_' + clean_pct + ' ' + sum + ' ' + ts + "\n";
    }

    sum = cumulativeValues[count-1];
    mean = sum / count;

    message += 'stats.timers.' + key + '.upper ' + max   + ' ' + ts + "\n";
    message += 'stats.timers.' + key + '.lower ' + min   + ' ' + ts + "\n";
    message += 'stats.timers.' + key + '.count ' + count + ' ' + ts + "\n";
    message += 'stats.timers.' + key + '.sum ' + sum  + ' ' + ts + "\n";
    message += 'stats.timers.' + key + '.mean ' + mean + ' ' + ts + "\n";
    statString += message;

    numStats += 1;
  }
}

StatsD iterates over each timer and processes it if it has received at least one value. It sorts the array of values, counts them, and locates the minimum and maximum. An array of cumulative values is then built and a few variables are initialized before it iterates over the percentile thresholds, calculates the percentile stats and appends their lines to the message. When the percentile calculation is done, the final sum and mean are computed, the remaining lines are added, and the message is appended to the statString variable.

If you send the following timer values to StatsD during the default flush interval

  • 450
  • 120
  • 553
  • 994
  • 334
  • 844
  • 675
  • 496

StatsD will calculate the following values

  • mean_90 496
  • upper_90 844
  • sum_90 3472
  • upper 994
  • lower 120
  • count 8
  • sum 4466
  • mean 558.25
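
As a sanity check, here is a small Ruby sketch of my own (not part of StatsD) that reproduces these numbers from the timer values above, following the same logic as the JavaScript code.

values = [450, 120, 553, 994, 334, 844, 675, 496].sort
count  = values.length                                   # => 8

# 90th percentile: drop the top 10% of the sorted values.
threshold_index  = (((100 - 90) / 100.0) * count).round  # => 1
num_in_threshold = count - threshold_index               # => 7

upper_90 = values[num_in_threshold - 1]                  # => 844
sum_90   = values.first(num_in_threshold).reduce(:+)     # => 3472
mean_90  = sum_90 / num_in_threshold                     # => 496

upper = values.last                                      # => 994
lower = values.first                                     # => 120
sum   = values.reduce(:+)                                # => 4466
mean  = sum / count.to_f                                 # => 558.25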

Gauges

A gauge simply indicates an arbitrary value at a point in time and is the simplest metric type in StatsD. It just takes any number and ships it to the backend.

The source code for gauge stats is just four lines.

for (key in gauges) {
  statString += 'stats.gauges.' + key + ' ' + gauges[key] + ' ' + ts + "\n";
  numStats += 1;
}

Feed StatsD a number and it sends it unprocessed to the backend. A thing to note is that only the last value of a gauge during a flush interval is flushed to the backend. That means that if you send the following gauge values to StatsD during a flush interval

  • 643
  • 754
  • 583

The only value that gets flushed to the backend is 583. The value of this gauge will be kept in memory in StatsD and be sent to the backend at the end of every flush interval.
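
For reference, the gauge wire format is '<bucket>:<value>|g', so the example above corresponds to three UDP datagrams like in this small Ruby sketch of mine (the bucket name is made up).

require 'socket'

socket = UDPSocket.new

# Three gauge updates within one flush interval; only the last value (583)
# ends up in Graphite as stats.gauges.queue_size.
[643, 754, 583].each do |value|
  socket.send "queue_size:#{value}|g", 0, 'localhost', 8125
end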

Graphite

Now that we know how our data is sent from StatsD, let's take a look at how it is stored and processed in Graphite.

Overview

In the Graphite documentation we can find the Graphite overview. It sums up Graphite with these two simple points.

  • Graphite stores numeric time-series data.
  • Graphite renders graphs of this data on demand.

Graphite consists of three parts.

  • carbon - a daemon that listens for time-series data.
  • whisper - a simple database library for storing time-series data.
  • webapp - a (Django) webapp that renders graphs on demand.

The format for time-series data in Graphite looks like this:

<key> <numeric value> <timestamp>

Storage schemas

Graphite uses configurable storage schemas to define retention rates for storing data. A schema matches data paths against a pattern and defines at what frequency, and for how long, the data is stored.

The following configuration example is taken from the StatsD documentation.

[stats]
pattern = ^stats\..*
retentions = 10:2160,60:10080,600:262974

This means these retentions will be used for every entry with a key matching the defined pattern. The retention format is frequency:history. So this configuration stores 10-second data for 6 hours, 1-minute data for 1 week, and 10-minute data for roughly 5 years.
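
The arithmetic behind those numbers is easy to verify; here is a quick Ruby check of the frequency:history pairs (the 5-year figure is approximate).

# retention = frequency_in_seconds : number_of_datapoints
10  * 2160   / 3600.0          # => 6.0   (hours)
60  * 10080  / 86400.0         # => 7.0   (days)
600 * 262974 / 86400.0 / 365   # => ~5.0  (years)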

Visualizing a timer in Graphite

Knowing all this, we can now take a look at my simple Ruby script that collects timings for an HTTP request.

#!/usr/bin/env ruby

require 'rubygems' if RUBY_VERSION < '1.9.0'
require './statsdclient.rb'
require 'typhoeus'

Statsd.host = 'localhost'
Statsd.port = 8125

def to_ms time
  (1000 * time).to_i
end

while true
  start_time = Time.now.to_f

  resp = Typhoeus::Request.get 'http://www.example.org/system/information'

  end_time = Time.now.to_f

  elapsed_time = (1000 * end_time) - (to_ms start_time)
  response_time = to_ms resp.time
  start_transfer_time = to_ms resp.start_transfer_time
  app_connect_time = to_ms resp.app_connect_time
  pretransfer_time = to_ms resp.pretransfer_time
  connect_time = to_ms resp.connect_time
  name_lookup_time = to_ms resp.name_lookup_time

  Statsd.timing('http_request.elapsed_time', elapsed_time)
  Statsd.timing('http_request.response_time', response_time)
  Statsd.timing('http_request.start_transfer_time', start_transfer_time)
  Statsd.timing('http_request.app_connect_time', app_connect_time)
  Statsd.timing('http_request.pretransfer_time', pretransfer_time)
  Statsd.timing('http_request.connect_time', connect_time)
  Statsd.timing('http_request.name_lookup_time', name_lookup_time)

  sleep 10
end

Let's take a look at the visualized Graphite render of this data. The data covers the last 2 minutes and the elapsed_time target from the script above.

Image visualization

Render URL

Render URL used for the image below.

/render/?width=586&height=308&from=-2minutes&target=stats.timers.http_request.elapsed_time.sum
Rendered image from Graphite

Rendered image from Graphite, a simple graph visualizing elapsed_time for http requests over time.

JSON-data

Render URL

Render URL used for the JSON-data below.

/render/?width=586&height=308&from=-2minutes&target=stats.timers.http_request.elapsed_time.sum&format=json
JSON-output from Graphite

In the results below, we can see the raw data from Graphite. There are 12 data points, which corresponds to 2 minutes with the StatsD 10-second flush interval. It really is this simple; Graphite just visualizes its data.

The JSON-data is beautified with JSONLint for viewing purposes.

[
    {
        "target": "stats.timers.http_request.elapsed_time.sum",
        "datapoints": [
            [
                53.449951171875,
                1343038130
            ],
            [
                50.3916015625,
                1343038140
            ],
            [
                50.1357421875,
                1343038150
            ],
            [
                39.601806640625,
                1343038160
            ],
            [
                41.5263671875,
                1343038170
            ],
            [
                34.3974609375,
                1343038180
            ],
            [
                36.3818359375,
                1343038190
            ],
            [
                35.009033203125,
                1343038200
            ],
            [
                37.0087890625,
                1343038210
            ],
            [
                38.486572265625,
                1343038220
            ],
            [
                45.66064453125,
                1343038230
            ],
            [
                null,
                1343038240
            ]
        ]
    }
]

Visualizing a gauge in Graphite

The following simple script ships a gauge to StatsD, simulating a number of user registrations.

#!/usr/bin/env ruby

require './statsdclient.rb'

Statsd.host = 'localhost'
Statsd.port = 8125

user_registrations = 1

while true
  user_registrations += Random.rand 128

  Statsd.gauge('user_registrations', user_registrations)

  sleep 10
end

Image visualization - Number of user registrations

Render URL

Render URL used for the image below.

/render/?width=586&height=308&from=-20minutes&target=stats.gauges.user_registrations
Rendered image from Graphite

Another simple graph, just showing the total number of registrations.

Image visualization - Number of user registrations per minute

By using the derivative-function in Graphite, we can get the number of user registrations per minute.

Render URL

Render URL used for the image below.

/render/?width=586&height=308&from=-20minutes&target=derivative(stats.gauges.user_registrations)
Rendered image from Graphite

A graph based on the same data as above, but with the derivative function applied to visualize a per-minute rate.

Conclusion

Knowing more about how StatsD and Graphite work makes it a lot easier to decide what kind of data to ship to StatsD, how to ship it, and how to read the data back from Graphite.

Got any comments or questions? Let me know in the comment section below.

Speed Reading - How I Started to Read Faster and More

Image by paulbence

Over the past 2 years I have read a total of around 4 books. Now I have finished 3 books in about 2 weeks, with the last week being far more effective, reading-wise, than the first. Why did my reading change so drastically?

It was really simple! I read a few articles on the internet about speed reading and I applied the following techniques to my reading.

Stop pronouncing each word in your head

This is called sub-vocalization. It is a normal habit to have, and it means that you pronounce each word in your head as you read it. Stop doing this. Just sweep through the entire sentence without it. Practice.

Do not reread

Another common habit is to stop reading and skip backwards to reread words. Stop doing this.

Use a pointer

Use your pen or your finger to sweep through the reading material so you get a consistent eye motion. This is essential, and was the key to getting me to read faster.

Practice

To get better at reading, read more. It is as simple as that. Start reading a book now. This blog will still be here when you are finished with it.

How many books do you read? What techniques do you use when you are reading?

Visualizing Logdata With Logstash, Statsd and Graphite

Inspired and passionate

Inspired by Etsy’s blog post Measure Anything, Measure Everything, I have given metrics, and how to extract them, a lot of thought. I work at Mintra, where I am responsible for operations of our LMS application, which is written in Java. The team I work on consists mostly of developers, so much of my work also involves building bridges between operations and development.

After reading about Metrics and watching codahale’s presentation on Metrics, I instantly realized that I had to get this into our system. It did not take me long to find out that the rest of my team did not share my passion for implementing Metrics. Do not get me wrong: they love visualizing metrics and the idea of measuring our application, but Metrics just did not seem like the right fit for them.

I had to come up with something.

Logstash

Logstash. A tool for managing events and logs. I had played a bit with it earlier, but I did not know I could gather metrics from logs and ship them to statsd (or Graphite). It was this article in the Logstash documentation that made me realize that this was what I was looking for.

A few weeks ago, one of our developers added new log entries to our application log to debug problems we have encountered with indexing.

Log entries with Lucene indexing metrics
04 Jul 09:56:01,088 INFO  LuceneIndex-JMS - Indexing on MASTER took (sync: true): 3602
04 Jul 09:56:10,969 INFO  LuceneIndex-JMS - Indexing on MASTER took (sync: true): 2922
04 Jul 09:56:38,762 INFO  LuceneIndex-JMS - Indexing on MASTER took (sync: true): 2697
04 Jul 09:56:43,985 INFO  LuceneIndex-JMS - Indexing on MASTER took (sync: true): 2706

These logs were the perfect entry point for us to start visualizing what our application does in production.

On the server writing these logs, I set up a simple Logstash agent that reads the log file, filters it with a few grok filters I wrote, and then ships the metrics to statsd. Collection and aggregation of the metrics is done in statsd, which then ships them off to Graphite, where we can use the render URL API to visualize what is happening.

Logstash Graphite shipper agent - logstash-graphite.conf
input {
  file {
    path => '/var/log/tomcat/lucene-jms.log'
    type => 'indexing-stats'
  }

  file {
    path => '/var/log/tomcat/access.log'
    type => 'access-log'
  }
}

filter {
  grok {
    type => 'indexing-stats'
    patterns_dir => '/home/user/logstash/patterns'
    pattern => '%{LUCENEJMS}'
  }

  grok {
    type => 'access-log'
    pattern => '%{COMBINEDAPACHELOG}'
  }
}

output {
  statsd {
    host => 'graphite.example.org'
    count => [ "tomcat.bytes", "%{bytes}" ]
  }

  statsd {
    host => 'graphite.example.org'
    increment => "tomcat.response.%{response}"
  }

  statsd {
    host => 'graphite.example.org'
    timer => [ "tomcat.indextime", "%{indextime}" ]
  }
}

Our application did not have a standard date format so I had to write these simple grok filters.

Grok patterns for Logstash - /home/user/logstash/patterns/application.grok
LOG4JTIME %{MONTHDAY} %{MONTH} %{TIME}
LUCENEJMS %{LOG4JTIME} %{WORD:severity} %{DATA:message} %{NUMBER:indextime}
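
With the configuration and patterns in place, the agent is started the same way as in the earlier logstash post, pointing it at this configuration file (the jar version will vary with your installation).

$ java -jar logstash-1.1.1-monolithic.jar agent -f logstash-graphite.conf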

The statsd and Graphite installations we use are completely standard, with no custom configuration, so getting them up and running is up to you. There are lots of resources out there about both, so look them up if you need more information.

Visualizing with Graphite

With all the components up and running, we can now visualize the metrics with Graphite. The Graphite render URL API is packed with functions that can be used to visualize data.

Here are a few examples.

Total index time

Index time with Holt Winters forecast

Index time with summarized data

Green line - Average index time per hour
Red line - Maximum index time per hour

Index time with standard deviation

Green line - Standard deviation for the past 10 datapoints
Red line - Standard deviation for the past 100 datapoints

Index time with moving average

Green line - Average for the past 10 datapoints
Red line - Average for the past 100 datapoints

Do you measure your application?

Sensu - Standalone Checks

It has been a few months since I first read about Sensu in the Devops Weekly newsletter. If you do not know what Sensu is, it is a very small and scalable open source monitoring framework. Sensu is written in Ruby and you can find the source code on GitHub. Joe Miller has written several good blog posts about Sensu, so I will not give any further introduction here.

Standalone checks for Sensu

After hanging around in the #sensu IRC channel on Freenode for a while, I discovered a nice feature which is, at the time of writing, undocumented. Oh well, it is documented quite well in my IRC logs, come to think of it.

<@portertech> xerxas: sensu doesn’t currently do config/check disco due to security concerns, there is support for “standalone” checks, they only have to be on the clients as the client schedules its own execution, extra data is sent to the server along w/ the result

<@portertech> adding “standalone”: true to a check definition makes this happen

Standalone checks are great for my Sensu setup. They are checks that you only need to define on the client; the client schedules the execution of the check itself and then sends the result to the server. Note that the handler will still be executed on the sensu-server.

Client configuration

As portertech wrote, it is quite simple to add standalone checks. Add the following file to a sensu-client, and it should just work.

/etc/sensu/conf.d/check_http_server.json
{
  "checks": {
    "check_http_server": {
      "notification": "example.org HTTP port 8080",
      "command": "PATH=$PATH:/usr/lib/nagios/plugins/ check_http -H example.org -u /path/ -p 8080",
      "subscribers": [ "base" ],
      "standalone": true,
      "interval": 120,
      "occurrences": 4,
      "handlers": ["default"]
    }
  }
}

Creating Graphs With Gruff

Gruff is a graphing library for making beautiful graphs with Ruby. It’s quite simple and easy to use. To use it, you need the gruff gem installed.

$ gem install gruff
#!/usr/bin/env ruby

require 'rubygems' if RUBY_VERSION < '1.9.0'
require 'csv'
require 'gruff'

unless ARGV.length == 1
  puts "usage: #{$0} path/to/data.csv"
  exit 0
end

csv_data = CSV.read(ARGV[0], :headers => true, :header_converters => :symbol, :return_headers => false)

graph = Gruff::Line.new
graph.title = "My cool graph"

csv_data.headers.each do |graph_item|
  csv_data[graph_item] = csv_data[graph_item].flatten.collect { |i| i.to_i }
  graph.data(graph_item.to_s, csv_data[graph_item])
end

graph.write 'sample.png'

With a CSV file like the following

runtime,responsetime,transfer,appconnect,pretranser,connect,namelookup
19,18,18,0,8,8,1
10,9,9,0,3,3,1
8,7,7,0,3,3,1
8,7,7,0,3,3,1
8,7,7,0,3,3,1
14,12,12,0,3,3,1
11,9,9,0,5,5,1
8,7,7,0,3,3,1
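
Assuming the script above is saved as graph.rb and the CSV file as data.csv (both names are mine, not from the post), you would run it like this and find the result in sample.png.

$ ruby graph.rb data.csv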

You would get a graph like this

Unlock Your Screen With Udev

To prevent my colleagues from “accidentally” visiting my irc-client when I am away from the keyboard, I use xlock to lock my screen. Who doesn’t?!

Typing my password every time I need to unlock the screen is too much hassle, so I wrote this simple udev rule to unlock my screen when I plug my phone into the computer via USB.

Find the device path to the phone

First, run ‘udevadm monitor --property’ and plug in the phone to find the device path.

$ udevadm monitor --property
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent

KERNEL[7893.790637] add      /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.4/1-1.4.3 (usb)
UDEV_LOG=3
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.4/1-1.4.3
SUBSYSTEM=usb
.....

Find a unique identifier for the phone

Then run ‘udevadm info -a --path=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.4/1-1.4.3’ to find the unique identifiers.

$ udevadm info -a --path=/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.4/1-1.4.3

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.4/1-1.4.3':
    KERNEL=="1-1.4.3"
    SUBSYSTEM=="usb"
    DRIVER=="usb"
    ATTR{configuration}==""
    ATTR{bNumInterfaces}==" 1"
    ATTR{bConfigurationValue}=="1"
    ATTR{bmAttributes}=="c0"
    ATTR{bMaxPower}=="500mA"
    ATTR{urbnum}=="799"
    ATTR{idVendor}=="0bb4"
    ATTR{idProduct}=="0ff9"
    ATTR{bcdDevice}=="0226"
    ATTR{bDeviceClass}=="00"
    ATTR{bDeviceSubClass}=="00"
    ATTR{bDeviceProtocol}=="00"
    ATTR{bNumConfigurations}=="1"
    ATTR{bMaxPacketSize0}=="64"
    ATTR{speed}=="480"
    ATTR{busnum}=="1"
    ATTR{devnum}=="31"
    ATTR{devpath}=="1.4.3"
    ATTR{version}==" 2.00"
    ATTR{maxchild}=="0"
    ATTR{quirks}=="0x0"
    ATTR{avoid_reset_quirk}=="0"
    ATTR{authorized}=="1"
    ATTR{manufacturer}=="HTC"
    ATTR{product}=="Android Phone"
    ATTR{serial}=="HT03YPL*****"
........

Based on the data we found, I wrote this simple udev rule that will kill xlock, unlocking the screen, when the phone is plugged into my computer via USB.

/etc/udev/rules.d/51-htc-desire.rules

# ---
# HTC Desire screensaver unlock
SUBSYSTEM=="usb", ACTION=="add", ATTRS{idVendor}=="0bb4", ATTRS{serial}=="HT03YPL*****", RUN+="/usr/bin/pkill -9 -f xlock"
# ---
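
udev usually picks up new rule files automatically, but if the rule does not trigger right away you can force a reload with the following command.

$ sudo udevadm control --reload-rules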