Friday, August 12, 2011

Android Performance Testing

1. Intro

I'm building a distributed testing infrastructure on top of KATS for a DARPA project, and we needed to monitor CPU, memory, and process profiling information (including context switches) throughout a test run. MADARA already has a library called MAML that allows for quick Python script development to instrument a phone via the Android Monkeyrunner tool, but Monkeyrunner doesn't really provide for performance profiling. So, how do you quickly and easily retrieve a summary of CPU and memory utilization on your Android phone?

2. Solution

The information is available through several utilities in the Android Debug Bridge, including top and the various files under the /proc directory. I've made these more accessible through additions to the open-source maml.py library and the new maal.py (Madara Android ADB Library), which does not require Monkeyrunner at all.

MAAL provides much of the same functionality that MAML does, but is much slower with keyevents (I will fix this by reusing the same shell session, but it isn't a priority right now). MAAL and MAML also have a new library function called print_device_stats, which prints both a long form and a one-line summary of CPU and memory usage.
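For example, a minimal MAAL script looks something like the sketch below. The module name and the print_device_stats signature are assumptions for illustration; check maal.py for the real interface.


#!/usr/bin/env python
# Minimal sketch of using MAAL without Monkeyrunner. The module name
# and the print_device_stats signature are assumptions; check maal.py.

import maal

# print the long-form CPU, memory, and process breakdown
maal.print_device_stats ()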

Three scripts have also been added to utilize these libraries and provide general-purpose testing information for Android smartphone programmers. For example, section 2.1 below shows a detailed view of the current memory and CPU usage on a Motorola Droid in our lab.


2.1. maal_monitor.py

The command line arguments for maal_monitor.py are available by passing -h or --help to the script. The following script execution monitors performance for 1 iteration (-n 1) and prints the top 10 CPU-intensive processes running on the phone.


./maal_monitor.py -p 10 -n 1
Memory: 5892 kB free of 230852 kB
User 4%, System 6%, IOW 0%, IRQ 0%
User 15 + Nice 0 + Sys 21 + Idle 273 + IOW 0 + IRQ 0 + SIRQ 0 = 309

  PID CPU% S  #THR     VSS     RSS PCY UID      Name
 1021   4% S    57 215056K  56096K  fg system   system_server
30994   3% S    20 140436K  24940K  bg app_24   edu.vu.isis.ammo.spotreport
23827   2% R     1    876K    392K  fg shell    top
  177   0% S     1      0K      0K  fg root     omap2_mcspi
    5   0% S     1      0K      0K  fg root     events/0
  995   0% S     2   1272K    128K  fg compass  /system/bin/akmd2
 1053   0% S     1      0K      0K  fg root     tiwlan_wq
  160   0% S     1      0K      0K  fg root     cqueue
  180   0% S     1      0K      0K  fg root     cpcap_irq/0
  238   0% S     1      0K      0K  fg root     ksuspend_usbd


The summarized view looks like this for maal_monitor.py:


./maal_monitor.py -p 10 -n 1 -s
Memory: 5952 kB free of 230852 kB. CPU: Total 6% (User: 2% Sys: 4%)


maal_monitor.py is great for taking periodic measurements, but this may be too coarse-grained and too inaccurate with regard to CPU utilization for your testing needs (with maal_monitor.py, we're essentially polling every five seconds for current utilization, which is approximated). If you need hard numbers for CPU usage, context switches, number of processes launched, etc., I provide a separate set of scripts.



2.2. maal_proc_stats.py and maal_stats_cmp.py

These scripts are generally used in the following way: 1) call maal_proc_stats.py with an outfile location on your storage drive, 2) run your test, 3) call maal_proc_stats.py with an outfile location different from the one in step 1, and 4) call maal_stats_cmp.py on the two files created in steps 1 and 3 to get the performance difference for your test. A sketch of this cycle appears below.
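In script form, the whole cycle looks roughly like the following. The --outfile flag name is an assumption for illustration (the --infile1/--infile2 flags for maal_stats_cmp.py are shown later in this post); check each script's --help output for the real argument names.


#!/usr/bin/env python
# Sketch of the snapshot/test/snapshot/compare cycle described above.
# The --outfile flag name is an assumption; check --help for the
# real argument names.

import subprocess

def snapshot (outfile):
  # take a performance snapshot from the phone
  subprocess.call (["./maal_proc_stats.py", "--outfile", outfile])

snapshot ("first.stats")                            # step 1: baseline
raw_input ("Run your test now, then press Enter ")  # step 2: your test
snapshot ("second.stats")                           # step 3: post-test

# step 4: report the resources consumed during the test
subprocess.call (["./maal_stats_cmp.py",
                  "--infile1", "first.stats",
                  "--infile2", "second.stats"])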

The output for maal_proc_stats.py will look like the following:


./maal_proc_stats.py -s
Clockticks: User 5451112. System 5357590. IO 566. 
Processes: Num 56061. Switches 812051296.


The first line shows the clock ticks spent in user processes, system processes, and dispatching IO (it does not count idle ticks). The second line shows the number of processes that have been launched since phone boot, and the number of context switches since boot. These values come straight from the phone's /proc/stat file; a rough parsing sketch follows.
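The sketch below pulls the standard /proc/stat fields over adb and prints a similar summary. This is an illustration, not the actual maal_proc_stats.py source; MAAL may aggregate the fields differently.


#!/usr/bin/env python
# Rough sketch of reading the /proc/stat fields behind the summary
# above. This is an illustration, not the actual MAAL source.

import subprocess

raw = subprocess.Popen (["adb", "shell", "cat", "/proc/stat"],
                        stdout=subprocess.PIPE).communicate ()[0]

procs = switches = 0
for line in raw.splitlines ():
  fields = line.split ()
  if not fields:
    continue
  if fields[0] == "cpu":
    # aggregate line: user nice system idle iowait ... (clock ticks);
    # idle ticks are not counted in the summary
    user, nice, system, idle, iowait = map (long, fields[1:6])
    print "Clockticks: User %d. System %d. IO %d." % (user, system, iowait)
  elif fields[0] == "processes":
    procs = long (fields[1])       # processes launched since boot
  elif fields[0] == "ctxt":
    switches = long (fields[1])    # context switches since boot

print "Processes: Num %d. Switches %d." % (procs, switches)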

Now, after creating two files with this script according to steps 1 and 3 above, you can process the performance information that changed during the test by running the following:


./maal_stats_cmp.py --infile1 first.stats --infile2 second.stats
Clockticks: User 629. System 393. IO 0. 
Processes: Num 23. Switches 35566.


Not only can you use maal_stats_cmp.py on the output from maal_proc_stats.py, but you can also process the difference between copies of the /proc/stat file itself (just adb pull the files to your computer first).

With these numbers in place, you should be able to configure systems like BuildBot or your favorite scoreboarding system to display test results against threshold values based on resource usage from known-good test runs. If changes have caused huge CPU or memory spikes, they should show up in one or both of these MAAL performance logging methods.

3. Downloads

MAAL and associated scripts

MAML and any open-source scripts
MADARA KATS for synchronizing and coordinating testing processes

Wednesday, June 1, 2011

Android Monkeyrunner and the Google ADB: a lament

Intro

So, for the past couple of months, I've been trying to get Android Monkeyrunner to cooperate for distributed automated testing, but it has been an uphill battle... against an entrenched army of monkeys armed with bazookas. I wanted the Monkeyrunner library to work well, but I get the feeling that Monkeyrunner has not been tested or used much.

The honeymoon

My experience with Monkeyrunner a month or two ago didn't start out all bad. The Monkeyrunner press function works much faster than "adb shell input keyevent" calls (likely because adb launches a new shell with every invocation and provides no option to chain a long string of keyevents together in the same session), and I got a glimpse of how easy smartphone automation could be without writing customized Java unit tests or installing Robotium on the phone. I could just send KeyEvents to the phone, type a string, and even connect 2 to 8 phones to our servers and launch Monkeyrunner tests in parallel (more on problems with this later). With Monkeyrunner, I could instruct a non-technical person on how to write a test based simply on how they would use a directional pad and keyboard to navigate around the activity.

Aaaaaand we don't even cuddle anymore

The first problem with Monkeyrunner for me came in the form of the type function being broken when the space key is used. This is not unique to Monkeyrunner; adb shell input text appears to suffer from a similar problem. There may be other KeyEvents (besides spaces) that trigger this particular hazard, but I was able to get around the issue for now by removing spaces from the text to be sent and pressing KEYCODE_SPACE where appropriate, as in the sketch below.
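The workaround looks roughly like this; type_text is just an illustrative helper name, not part of the Monkeyrunner API:


# Sketch of the space workaround: split on spaces, type each chunk,
# and press KEYCODE_SPACE in between. type_text is an illustrative
# helper, not part of the Monkeyrunner API.

from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

def type_text (device, text):
  chunks = text.split (' ')
  for i, chunk in enumerate (chunks):
    if chunk:
      device.type (chunk)
    if i < len (chunks) - 1:
      device.press ('KEYCODE_SPACE', MonkeyDevice.DOWN_AND_UP)

device = MonkeyRunner.waitForConnection ()
type_text (device, 'hello automated world')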

There were a couple of other problems with MonkeyRunner that kept cropping up. First, there is very little support for debugging the state of the activity you are trying to instrument.

You can't even get information on whether or not the activity has crashed without going back to adb and logcat. You can't form KeyEvent pairings that select an entire EditText without long clicks, but long clicks are hard to emulate when the EditText could be in a different location on the screen due to portrait or landscape modes, or even because the screen resolution is different between two phones.

You can't press two buttons at once because the DOWN type in the press method is apparently mapped directly to DOWN_AND_UP. Basically, the shift is unpressed immediately after you get out of the press function, regardless of what you pass it. This caused some headaches when trying to select all text, but it was manageable. No automation killer problem found yet... until Tuesday...

Monkeyrunner is a racist... that's a software library that causes race conditions, right?

On Tuesday came the worst problem, which drove me to try to rewrite the Monkeyrunner library without modifying the Android Debug Bridge. There is a race condition in the Monkeyrunner WaitForConnection method that occurs when you try to wait for multiple phones at once (even from separate heavyweight processes). The only way to really witness this issue is with an automated system launching activities on 2 to 8 phones at once (humans take milliseconds or seconds to launch each by hand, so the race condition is hard for a manual tester to catch). WaitForConnection causes random behavior on one of the phones while opening the other without a problem for a moment; then the automation on all phones halts. The issue is very weird.

We got around this with a short-term fix by ensuring that we always waited 1 second after the previous phone launched before starting its automation (via the KATS process life cycle), roughly as sketched below. While this works, it is not ideal. We wanted to launch 2-8 phones at once per server (as many USB connections as we can manage right now) and see if there were any race conditions involving the phones connecting or disseminating to the server. With this race condition in Monkeyrunner and our subsequent fix of sleeping between each phone launch, the phones will likely have 1 second of difference between sending, which means we can't test everything that we want to test.
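In sketch form, the stagger looks like this (the real version spaces out separate KATS-launched processes, and the serial numbers below are placeholders):


# Sketch of the 1-second stagger used to dodge the WaitForConnection
# race. The serial numbers are placeholders; the real fix staggers
# separate KATS-launched processes.

import time
from com.android.monkeyrunner import MonkeyRunner

serials = ['0123456789ABCDEF', 'FEDCBA9876543210']
devices = []

for serial in serials:
  # wait up to 30 seconds for this specific phone
  devices.append (MonkeyRunner.waitForConnection (30, serial))
  # sleep before touching the next phone to avoid the race
  time.sleep (1)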

What's most frustrating about this is that the problem is not on my end, and I can't seem to find any fix to this without modifying the Android code base.

Monkeyrunner withdrawal

To try to address the issue, I rewrote my entire Python scripting library, which wrapped Monkeyrunner, to instead use nothing but ADB under the hood. It started out promising. First, the adb equivalent of WaitForConnection was much, much faster (basically, I just used adb get-state). The WaitForConnection method must be establishing an actual session with the phone, and this is probably where the race condition is occurring (during the session creation, which is almost certainly not thread safe). So far, so good. Actually, the entire library was a breeze to write.
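The get-state replacement amounts to nothing more than polling until the device reports ready, roughly:


# Rough sketch of the adb get-state replacement for WaitForConnection.

import subprocess, time

def wait_for_device (serial, timeout=30):
  deadline = time.time () + timeout
  while time.time () < deadline:
    state = subprocess.Popen (["adb", "-s", serial, "get-state"],
                              stdout=subprocess.PIPE).communicate ()[0]
    if state.strip () == "device":
      return True   # adb reports "device" once the phone is ready
    time.sleep (0.25)
  return False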

Then I ran it... and the adb shell input keyevent command inserts those 1 second delays in between every KEYCODE_DPAD_LEFT, backspace, menu, etc. A 15 second Monkeyrunner test is extended to hundreds of seconds when using adb shell input keyevent. The culprit with adb shell is probably that a separate shell session is started with each invocation, rather than queuing the events to the target phone and returning immediately. I can understand this not being the default behavior, but I can't really understand why an asynchronous or queuing version isn't available.
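The fix I have in mind for MAAL is to keep a single adb shell session open and feed it keyevents, roughly as sketched below. How much time this recovers depends on whether the delay lives in the shell startup or in each input invocation, so treat this as an experiment, not a guarantee.


# Sketch: reuse one adb shell session for many keyevents instead of
# spawning a new shell per event. Numeric keycodes are used because
# older "input keyevent" builds only accept integers.

import subprocess

shell = subprocess.Popen (["adb", "shell"], stdin=subprocess.PIPE)

# 21 = KEYCODE_DPAD_LEFT, 67 = KEYCODE_DEL, 82 = KEYCODE_MENU
for keycode in [21, 21, 67, 82]:
  shell.stdin.write ("input keyevent %d\n" % keycode)

shell.stdin.write ("exit\n")
shell.stdin.flush ()
shell.wait ()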

A lamentable conclusion

Being able to send KeyEvents to an Android phone is pretty awesome. I hope that the Google folks either fix the race conditions in MonkeyRunner or they fix the delays in adb shell so we can send KeyEvents at decent speeds. For the moment though, this is my Monkeyrunner sad face :(

Library files

The wrappers around the Monkeyrunner and ADB interfaces are linked below. The library is called the MADARA Android Monkeyrunner Library (MAML).

MAML sans Monkeyrunner
MAML original

Tuesday, May 31, 2011

The KaRL Automated Testing Suite

So, we've submitted our first paper highlighting the KaRL Automated Testing Suite (KATS) to GPCE 2011, and the features of the toolset have really blossomed in the past month. KATS is a suite of tools that automate distributed deployment and testing in a cross platform way. This means that you can use KATS on a hybrid test bed with Windows and POSIX machines, and each of the machines will work together to accomplish distributed, automated testing.

The core of the KATS system is the KaRL reasoning engine, which provides the testing suite with a distributed knowledge and reasoning engine based on the anonymous publish/subscribe paradigm. The infrastructure is consequently host-agnostic, resulting in the ability to move tests between hosts without much difficulty. Tests can be started via cron jobs, and they will barrier and synchronize if needed.

One of the more interesting parts of the KATS system is the Generic Modeling Environment (GME) paradigm for visually modeling tests. You can read more about how to obtain and use KATS and its GME paradigm at the following links.

Links:

We're currently using KATS to model and execute distributed tests for smart phones and C++ services connected to and running on various host platforms. You can find out more at the links above.

Thursday, April 21, 2011

Android Monkeyrunner

I'm currently working on a project that requires automated testing of Android applications. Fortunately, Google has released a Python API called Monkeyrunner for manipulating Android devices, applications, intents, etc., but the help has been especially lacking. One reason for this is that the command the Monkeyrunner project page says will generate the API documentation doesn't work: help.py does not appear to be provided in the Android SDK. This blog post will remedy that situation.

According to the Google project site, you should be able to run the following command to generate the API docs for Monkeyrunner:


monkeyrunner <format> help.py <outfile>


Unfortunately, the help.py file does not appear to exist. So, I've created a help.py file that will give you all the capabilities of the old help.py, if it ever existed. Copy and paste the following into a new file called help.py on your computer (or download the file from here):


help.py file to create on your computer

#!/usr/bin/env python

# Imports the monkeyrunner modules used by this program
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
from optparse import OptionParser

# callback for -h/--help: print usage and flag that help was requested,
# so we skip generating the API documentation below
def help_callback (option, opt, value, parser):
  parser.values.help = True
  parser.print_help ()

parser = OptionParser (add_help_option=False)
parser.add_option ("-o", "-f", "--outfile", dest="outfile", default="help.html",
                   help="file to output monkeyrunner help to",
                   metavar="OUTFILE")
parser.add_option ("-t", "--type", dest="type", default="html",
                   help="type of output to generate (html or text)",
                   metavar="TYPE")
parser.add_option ("-h", "--help", dest="help", default=None,
                   action="callback", callback=help_callback,
                   help="show usage information for this script")

(options, args) = parser.parse_args ()

if options.help is None:
  # generate the API reference and write it to the outfile
  text = MonkeyRunner.help (options.type)
  f = open (options.outfile, 'w')
  f.write (text)
  f.close ()

  print "\nMonkeyrunner help written to " + options.outfile + " (type:" \
      + options.type + ")\n"




The file comes with its own help and usage information, which you can access by providing a -h or --help option like so:


monkeyrunner help.py -h


By default, help.py sets the output file to help.html and the output type to html. Feel free to use this to generate the Monkeyrunner built-in help for reference on your system.

Monday, March 14, 2011

LaTeX in a Nutshell (#1)

Introduction

Writing research papers in industry tends to involve one of two text formats: Word or LaTeX. In my experience, Word still dominates the creation of technical reports, papers, etc., but LaTeX was essentially written for programmers and technical researchers who want an extensible, programmable paper format. This blog entry is intended to group together examples and descriptions of LaTeX features. Hopefully, the series will appeal to beginner and intermediate LaTeX users.

Starting Out

Like C++ or Java, you are probably going to break your main project into pieces. If you are collaborating with other authors or researchers, this will be especially important, as it will allow each of you to work on and revise different sections of the paper at the same time with no conflicts! Importing other LaTeX files is easy. Let's start with a simple example of a paper with three sections: abstract, solution, and experiments. We use the ACM Conference Proceeding document class to format the document for submission to an ACM conference. Be sure to download the .cls files into the same directory as your document!

\documentclass{acm_proc_article-sp} 

% package includes.
% For now, including graphicx for images is enough
\usepackage{graphicx}

% begin document signals the beginning of rendering
% anything before this point is just metadata or package includes
\begin{document}

% title and author information. Note how we specify
% two different authors (James Edmondson and John Smith)
\title{Research Paper}
\numberofauthors{2}
\author{
  \alignauthor James Edmondson\\
    \affaddr{Vanderbilt University}\\
    \email{james.r.edmondson@vanderbilt.edu}
  \alignauthor John Smith\\
    \affaddr{Vanderbilt University}\\
    \email{john.q.smith@vanderbilt.edu}
}

% Render the author and title information first
\maketitle

% These are macros that we can use in tables to provide
% extra space at the top (\T) when under a horizontal line
% and at the bottom (\B) when on top of a horizontal line
\newcommand\T{\rule{0pt}{2.6ex}}
\newcommand\B{\rule[-1.2ex]{0pt}{0pt}}
 
% include the three sections
\input{abstract}
\input{solution}
\input{experiments}

% include the bibliography
\bibliographystyle{abbrv}
\bibliography{master}
\end{document}

This main file, which is frequently named after the targeted conference, might be called technicalreport.tex. The three included files would need to be called abstract.tex, solution.tex, and experiments.tex. At the end of the file, we include a bibliography called master.bib.

The three input files can just be a paragraph each, if you like, but the master.bib file has to follow the BibTeX format; an example entry is shown below. One of the best parts about using LaTeX is that most research portals like ACM, IEEE, and Citeuseek provide BibTeX entries for you to copy directly into your master.bib file. For instance, here is a complete listing of papers by E.W. Dijkstra. The beauty of BibTeX is that when you change the documentclass and the bibliographystyle, the bibliography is automatically reformatted to the specification of your target conference or journal. Anyone who has had to do this in Word knows how difficult this can be with most word processors.
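For reference, a master.bib entry looks like the following (this one is a fabricated placeholder, not a real citation):


@inproceedings{edmondson2011example,
  author    = {James Edmondson and John Smith},
  title     = {An Example Paper Title},
  booktitle = {Proceedings of an Example Conference},
  year      = {2011},
  pages     = {1--10}
}


You would then cite it from any of your section files with \cite{edmondson2011example}, and BibTeX handles the formatting.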

Another nice feature of BibTeX is that it will only print entries for papers or journals that are actually cited in the paper. In this way, you can build a master bibliography file that has all of the papers you have ever read and share it between all of your papers without any problems.

Notes

Try to avoid using underscores (_) or percent signs (%) in your bibliography. If you do use these, make sure you escape the character with a backslash (e.g., \_ or \%).

Links for Further Reading

Check out these links if you want to find out specifics that may not be covered in this blog series.


Sunday, March 6, 2011

Undocumented ACE_OS::sleep caveats

For those in need of sleep in microseconds, understand that Windows provides no such mechanism.


Intro

Recently, I needed a method for setting a hertz publication rate on a publisher that would work in both Linux and Windows. The publication rate should be able to go up to MHz at least, which requires a sleep mechanism capable of 1,000,000,000 ns / 1,000,000 = 1,000 ns of precision. Consequently, the sleep would be required to function at the microsecond level.


Tools and methodologies

I decided to stick with the ACE library and specifically use the ACE_OS::sleep(const ACE_Time_Value &) call. On the surface, this should allow us to sleep for microseconds, and it does - with one small caveat: the operating system needs to have a sleep mechanism that is capable of actual us (microsecond) precision.


Problems

In WIN32 mode, the ACE_OS::sleep call uses the ::Sleep method provided by the Windows operating system. Unfortunately, ::Sleep works at millisecond precision. This means that you either blast (i.e., no sleep statement at all), or you specify a hertz rate of <= 1 kHz (1 ms of sleep).



Solutions

One potential solution is bursting events and then sleeping for 1 ms. The trick is to work out a bursting pattern where the single 1 ms sleep stands in for all the microsecond sleeps that should have occurred over that period. This isn't modeling exactly what you want, but the alternative is to simply allow only bursting or <= 1 kHz. In other words, there is no beautiful, portable solution to this that isn't going to cause stress on whatever you are trying to test (bursting is always a worst case for any software library).
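To make the bursting pattern concrete, here is a language-agnostic sketch in Python (the real code is C++ on top of ACE): above 1 kHz, send a burst of events per 1 ms sleep so that the average rate matches the target.


# Language-agnostic sketch of the bursting pattern (the real code is
# C++ on ACE). Above 1 kHz, send hz/1000 events per 1 ms sleep so the
# average rate matches the target. The sketch ignores remainders and
# the cost of publish itself.

import time

def publish_at (hz, publish, total_events):
  if hz <= 1000:
    period = 1.0 / hz              # period >= 1 ms, so the OS can honor it
    for _ in xrange (total_events):
      publish ()
      time.sleep (period)
  else:
    burst = int (hz / 1000)        # events owed per 1 ms window
    sent = 0
    while sent < total_events:
      for _ in xrange (burst):     # burst, then sleep the 1 ms floor
        publish ()
      sent += burst
      time.sleep (0.001)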



Downloads

KaRL Dissemination Test - Tuned to burst mode on Windows and simply sleep for microseconds on POSIX.

Saturday, March 5, 2011

For loops just aren't what they used to be

Sometimes, compilers are too damned good at optimization.

Intro

My PhD dissertation currently centers around a knowledge and reasoning engine and middleware called KaRL, part of my Madara toolsuite. In a recent paper, I wanted to do some performance testing of the KaRL distributed reasoner, and so I attacked the testing from three vectors: reasoning throughput (the number of rules per second the engine could perform without distributed knowledge), dissemination throughput (the number of rules per second sent over the wire in a LAN), and dissemination latency.

To make things more interesting, I decided to form a baseline for reasoning throughput. How about optimal C++ performance with a for loop and pre-increments (e.g., ++var)? Oh, and it needs to be portable across Windows and Linux. Easy enough, right?


Problems, Solutions, and More Problems

The first problem on the docket was one of timer precision. I decided to go with ACE_High_Res_Timer after some unsuccessful and highly error-prone usage of the underlying gethrtime. The High_Res_Timer class also corrects for global scale factor issues in the return values of QueryPerformanceCounter(). So far, so good.

The results on my Linux and Windows machines were right in line with what I expected. Through function inlining, expression tree caching, and various other mechanisms, we are able to efficiently parse KaRL logics at greater than 1 MHz. However, when I started comparing against my supposed baseline, I discovered that the ACE_High_Res_Timer was reporting that the optimized C++ for loop of ++var was performing at an amazing 60 GHz to over 1 THz... on a 2.5 GHz processor.

What the heck was going on here?

It turns out that modern C++ compilers will completely optimize out for loops if they can. My specific issue, which remains unsolved in a portable manner, involved a for loop with a simple accumulator (var) incremented a certain number of times. I had started a timer before the for loop and stopped it after the loop was over, but the assembly language generated from the C++ programs contained no for loops in the function at all. The compiler had simply moved the final value that the loop would have produced into var. The timer was effectively reporting the time it took to query the system for the nanosecond-precision timers, since the couple of assembly instructions that remained were not enough to amount to any nanoseconds at all.


Remarks on Known Solutions

In Visual Studio, I was able to circumvent the issue in two ways: first, by using __asm { nop }, which effectively inserts a no-op (an exchange of eax with itself), and second, by using volatile, which prevents the compiler from optimizing accesses to the variable or keeping it in registers.

In g++, unfortunately, I was only able to use volatile, which means that if I wanted to test the actual loop, I had to take away every other optimization the compiler might have applied to that variable.

Using volatile turns out to be the only portable thing I could think of. Internet searching seemed to confirm these suspicions. I would think there would be some way to specifically tell each compiler to simply not optimize out for loops in a particular function or file though.


Downloads

Solution, which unfortunately can't get around -O3 optimization in g++ and Release mode in Visual Studio.