Sunday, February 21, 2016

ACE recursive mutexes vs. STL recursive mutexes

In the MADARA engine, we must use recursive mutexes to protect the underlying dictionary-based shared knowledge. Since 2008 or so, we have been using ACE_Recursive_Thread_Mutex to protect our critical sections, but I have been following the C++11 spec with particular interest. The goal of MADARA is portability and speed across platforms like Windows, Linux, ARM, Intel, Mac, Android, etc., and ACE was a natural choice not only because of its platform support but also because of its well-tested code base and community development. Over the past five years or so, the community that supports and uses ACE has dwindled, and there has been a push within the C++ community to use libraries like Boost and the STL mutexes, which are essentially Boost libraries that have been standardized.

But for a middleware like MADARA that is especially concerned with performance on low-powered processors for robotics systems, it's not just about how excited the C++ community is about a particular library; it's also about speed and efficiency. So, to make our own decision on whether or not the C++11 spec was ready for prime time in portable middleware, I incorporated new features into the MADARA build process to allow for null mutexes (essentially no-ops that do not actually protect multi-threaded access), STL recursive mutexes, and our current usage of ACE recursive mutexes in an extensible way. After seeing the results, I retrofitted test_reasoning_throughput (one of our standard tests for performance measurements on a target platform) to include breakdowns of the C++ STL mutex and recursive mutex against the ACE implementations of ACE_Thread_Mutex and ACE_Recursive_Thread_Mutex.
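
For those curious how such a build-time switch can work, here is a minimal sketch of the general technique--a generic illustration of making the mutex type pluggable, not MADARA's actual build configuration. The Null_Mutex, Library_Mutex, and Guard names and the USE_NULL_MUTEX macro are invented for this example:

#include <mutex>

// a no-op mutex for builds that opt out of locking entirely
struct Null_Mutex
{
  void lock (void) {}
  void unlock (void) {}
};

// pick the implementation at build time; an ACE branch would adapt
// ACE_Recursive_Thread_Mutex's acquire ()/release () in the same way
#ifdef USE_NULL_MUTEX
  typedef Null_Mutex Library_Mutex;
#else
  typedef std::recursive_mutex Library_Mutex;
#endif

// minimal RAII guard over whatever mutex type was selected
template <typename Mutex>
class Guard
{
public:
  explicit Guard (Mutex & mutex) : mutex_ (mutex) { mutex_.lock (); }
  ~Guard (void) { mutex_.unlock (); }

private:
  Mutex & mutex_;
};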

First, here are the results of the direct comparisons of ACE mutexes and STL mutexes for g++ and Visual Studio 2015.

Settings
CPU: Intel® Core™ i7-4810MQ CPU @ 2.80GHz × 4
Linux: Ubuntu 14.04
g++ -v: Version Info
Windows: 7, SP1
Visual Studio: 2015
Results are reported as the average nanoseconds per operation over 100k operations.


As you can see from the above direct comparison, the g++ STL C++11 mutexes perform roughly the same as the ACE recursive mutexes. The Visual Studio 2015 performance is supposedly much better than the Visual Studio 2013 performance, but I could not get my installation of Visual Studio 2013 to handle the STL mutex library correctly at runtime (it compiled fine but just seemed to stall for no reason). For completeness, I've included a bunch of C++ operations with no mutex usage in the breakdown as well. This information is also printed by our test_reasoning_throughput test.
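
To give a sense of what is actually being measured, here is a minimal sketch of the kind of timing loop involved: lock and unlock 100k times and report the average nanoseconds per operation. This is my own illustration of the std::recursive_mutex case only, not the actual test_reasoning_throughput source; the real test exercises ACE_Thread_Mutex, ACE_Recursive_Thread_Mutex, and the STL mutexes for the full breakdown.

#include <chrono>
#include <iostream>
#include <mutex>

int main (void)
{
  const int iterations = 100000;
  std::recursive_mutex mutex;

  auto start = std::chrono::steady_clock::now ();

  for (int i = 0; i < iterations; ++i)
  {
    mutex.lock ();
    mutex.unlock ();
  }

  auto end = std::chrono::steady_clock::now ();
  auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds> (
    end - start).count ();

  std::cout << "std::recursive_mutex: " << total_ns / iterations
    << " ns per lock/unlock pair" << std::endl;

  return 0;
}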

Now, MADARA itself performs knowledge and reasoning operations for shared information in a distributed system. The test_reasoning_throughput tests many simple operations on the MADARA knowledge bases, using these recursive mutexes often in nested ways. It also enforces quality-of-service policies and various checks about knowledge consistency, time, and various other attributes. In short, it does useful things within the critical section.

The following table uses the same hardware, operating systems, and compilers to check performance of basic operations in MADARA.


Most of our current generation of MADARA software uses KaRL containers (the last line in the above table). Another thing we optimize for is large Knowledge and Reasoning Language (KaRL) programs, which fall into the 2nd and 4th rows of the above table. From these metrics, the ACE Recursive Thread Mutex is still the way for us to go. However, the performance of std::recursive_mutex in g++ is promising. Hopefully, the performance of the Visual Studio STL mutex library will catch up. After all, they have 30 years of open source code to look to for inspiration... if they care to open a web browser.

Sunday, November 3, 2013

On MADARA Documentation

For those who have followed my blog, you probably know that my dissertation research project MADARA (Multi-Agent Distributed Adaptive Resource Allocation) is a pet project that I still put a lot of my weekends and nights into. The feature set for MADARA has grown radically since I finished my dissertation, and the KaRL engine has become much more convenient, faster, and feature-rich. The result so far is a middleware that provides the timing, speed, and quality-of-service of a heavyweight middleware with the ease-of-use of a scripting language (at least I hope so).

In truth, I feel like MADARA has been ready for prime time usage for several months now, but I'm a perfectionist, and I have been working diligently on the middleware layers to provide easier-to-use features, deeper functionality, and faster execution.

But as ridiculous as it may sound, features are not the core of prime time readiness for a middleware. Documentation is the key to wider usage. I can't tell you how many middlewares I've downloaded that lack documentation, and it's extremely difficult to use any tool to its maximum effectiveness without good guidance. As much as I've tried to focus on feature building in the past several months, I've spent an equal amount of time on commenting, tutorials, tests, and the new Wiki pages. If MADARA ever does impress enough people to find its way into mainstream usage, I think the documentation will be the key.

Just how much documentation have I done with the MADARA middleware? Let's start with the code documentation--the in-source commenting from which the doxygen documentation for the library is generated:


MADARA contains over 66,000 lines of code right now (v1.1.13) and over 22,700 lines of commenting (for every 3 lines of code, there is one line of documentation). There are also over 18,000 blank lines to aid with code legibility, which I feel is just as important as documentation. To be clear, I try not to document the unnecessary--nothing as inane as "int i // an integer". The majority of the commenting is done for function headers so doxygen and IDEs like Visual Studio can provide helpful tips on usage, such as precondition/postcondition information, parameter listings and definitions, error information, and return value descriptions. And these lines are just as important to the middleware as code lines. To me, MADARA sits at roughly 107,000 lines with a healthy proportion of nearly 2/5 dedicated to documentation and readability.

But code documentation is only 2/5 of the battle when it comes to user presentation. Another important aspect is code examples, which I've worked diligently to provide in the tests and tutorials directories of the code base. Here are the cloc results from these two directories.

The tests directory, the first printed table, is meant to test every feature added to MADARA. It's also documented and uses descriptive variable names so people can use these tests as guides for usage. The commenting and blank line usage is similar to the main repo but slightly less, because there isn't much to doxygen comment. Consequently, the comment/readability ratio is 1 line of readability for roughly every 3 lines of code.

The tutorials, however, are meant as guides for developers, and they are thoroughly documented to discuss the intent of features and proper usage. One user commented to me recently that reading them is like reading technical papers because of how rich they are. This bears out in the cloc results. For every 1 line of code in the tutorials, there is at least a line of documentation or a blank line for readability. In fact, there are more documentation/readability lines than code lines in that directory.

As weird as it may sound, I also know that this readability and documentation of tests and tutorials may not constitute even 1/5 of the battle for prime time readiness of a middleware. After all, only someone who has downloaded the repository (i.e., only someone already convinced of the power and features of MADARA) would ever see the tests and tutorials directories. No, I feel the main focus on documentation has to go into external guides that help developers understand just what they're getting into with a new middleware or library. So, presentations and external guides featuring code examples and descriptions have received a great deal of focus as well--though not necessarily enough.

The main point of entry for external guides of MADARA is now the Wiki section of the MADARA project site. The MADARA Wiki now defaults to a set of guides that discuss high level overviews of the MADARA architecture, interactions with the Knowledge Base, interactions with the Transport layer, and what the target audience for the middleware is. There are half a dozen images outlining interactions and overviews to aid developers in visualizing the system, YouTube code tutorials, and video of example usage in a swarm of commercial UAVs--the latter two of which are available at the bottom of each Wiki page in the More Information section. These Wiki guides add an additional 1700 lines of effort to make MADARA more user-friendly and accessible.

So, what do you think? When you get a chance, check out the Wiki section of the MADARA project site and maybe the code tutorials on the left hand pane. Feel free to tell me what you think. Documentation is an ongoing process, and I welcome the feedback!

Friday, February 10, 2012

What is MADARA KATS?

MADARA is a focal point of my job talks, and KATS is one of the most exciting tools available in MADARA. In a nutshell, the KaRL Automated Testing Suite (KATS) is a portable deployment and testing system meant for automated sequencing or testing of distributed, real-time and embedded (DRE) systems.

What separates MADARA KATS from the rest of the pack is that the entire system operates in a decentralized way. This may not immediately seem important or interesting, but it opens up fine-grained control, support for fault tolerance, and responsiveness that can't be found in centralized solutions--i.e., solutions that rely on a centralized controller.

The features of KATS are itemized below:
  • Fully decentralized system that targets large scale testing across multiple machines in a local area network
  • Portable to most operating systems (Windows, Linux, Apple, etc.)
  • Control over launched application
    1. Executable, command line, environment variables and many other application inputs
    2. Kill time and signal (on Windows, only terminate is available)
    3. Real-time class for elevating process priority
  • Batch processing with parallel or sequential execution
  • XML configurable
  • Domain-specific modeling language available for modeling in GME
  • Ability to instrument Android smartphones via both Monkeyrunner (MADARA MAML library) and ADB (MADARA MAAL library)
  • 8-phase process lifecycle (See figure below for visual)
    1. Barrier (optional)--require that a group of processes come to a barrier before application launch
    2. Precondition (optional)--require that a condition is met before application launch (e.g., if another process succeeds or fails in one of its lifecycle phases)
    3. Temporal delay (optional)--operating system portable sleep time
    4. Post delay (optional)--set a global condition or perform logic that indicates you are past temporal delay phase
    5. Application launch--launch an application
    6. Post launch (optional)--set a global condition or perform logic that indicates your application has been launched.
    7. Post condition (optional)--set a global condition or perform logic based on the return value/exit code of your application.
    8. Exit
  • Built-in network transports for RTI DDS and Open Splice DDS. Other transports can be added via expansions to 2 functions in the Transport.h file
  • Host agnostic--i.e., you can deploy whatever you want wherever you want due to the use of an anonymous publish/subscribe network transport layer.
  • Fault tolerant--i.e., you can deploy multiple failover entities to guard against faulty hardware or anything else that might cause a process to fail. Additionally, you can create tests that detect and respond to faults/failures.
  • Nested tests and application launches
  • Microsecond precision between process lifecycle phases that are not dictated by blocking communication of a centralized controller.
MADARA KATS Process Lifecycle

Now, the microsecond precision is important for DRE systems, especially in reproducing race conditions. With KATS, the postcondition of a failed application launch can feed the relevant precondition of another application launch within fractions of a second. With this open-source, freely available framework, you can perform black-box sequencing at scale.
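
To make the barrier and precondition phases a bit more concrete, here is a rough sketch of how a decentralized barrier can be expressed with the KaRL compile/wait API shown in the January 2012 posts below. This is purely illustrative--the kats.barrier.* variable names and the logic itself are invented for this example and are not the actual KATS configuration (MADARA headers omitted, as in the later examples):

// the knowledge base and wait settings, as in the later posts
Madara::Knowledge_Engine::Knowledge_Base knowledge;
Madara::Knowledge_Engine::Wait_Settings settings;
Madara::Knowledge_Engine::Compiled_Expression barrier;

settings.poll_frequency = .001;   // re-check the barrier every millisecond
settings.max_wait_time = 30.0;    // give up after 30 seconds

// announce our arrival (this sketch is written from agent0's perspective),
// then block until all three agents have arrived; the update reaches the
// other agents over the anonymous publish/subscribe transport
barrier = knowledge.compile (
  "kats.barrier.agent0 = 1 ; "
  "kats.barrier.agent0 && kats.barrier.agent1 && kats.barrier.agent2");

knowledge.wait (barrier, settings);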

Additionally, there are white-box tools available to allow for distributed breakpoints within an application and powerful, thread-safe logging APIs in case those are needed. However, most people seem more interested in the black-box testing tools.

If you have questions about the MADARA KATS system, feel free to contact me at jedmondson (at) gmail.com.

Tuesday, January 17, 2012

Performance Increase in MADARA KaRL

The new built-in features in KaRL have resulted in very noticeable performance increases. Below are the changes in performance. These are timing metrics reported by the test_reasoning_throughput test, available in the source code repo, run on an Intel Core Duo with 4 GB RAM (the test itself only uses ~330 KB of memory).

BEFORE indicates timing values before two changes: compiled expressions, which circumvent the std::map lookups for cached KaRL logics, and built-in variable indexing, which removes the same type of std::map overhead from each variable lookup. AFTER indicates timing values after the constant-time built-in variable lookups were implemented. AFTER WITH COMPILED indicates the timing with both changes.

Execution times
BEFORE
for(1->10,000) ++var             997 ns
++var; x 10,000                  502 ns
for(1->10,000) true => ++var     1000 ns
true => ++var; x 10,000          597 ns
AFTER
for(1->10,000) ++var             642 ns
++var; x 10,000                  248 ns
for(1->10,000) true => ++var     637 ns
true => ++var; x 10,000          357 ns
AFTER WITH COMPILED
for(1->10,000) ++var             266 ns
++var; x 10,000                  169 ns
for(1->10,000) true => ++var     269 ns
true => ++var; x 10,000          196 ns
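
A note on reading the row labels: as best I can tell from the numbers, the "for(1->10,000) ++var" rows time a C++ for loop that calls the engine 10,000 times on a one-line logic, while the "++var; x 10,000" rows evaluate a single large KaRL logic containing 10,000 copies of the statement (the "larger logics" case mentioned in the post below). Here is a rough sketch of the first style, using the compile/evaluate API introduced in that post; the timing code and variable name are my own illustration, not the actual test source, and MADARA headers are omitted as in the later examples:

#include <chrono>
#include <iostream>

// sketch of the C++-loop measurement style; assumes a Knowledge_Base
// constructed as in the examples in the post below
void time_simple_increments (
  Madara::Knowledge_Engine::Knowledge_Base & knowledge)
{
  Madara::Knowledge_Engine::Eval_Settings settings;
  Madara::Knowledge_Engine::Compiled_Expression increment;

  // compile the one-line logic once, outside the timed loop
  increment = knowledge.compile ("++var");

  const int iterations = 10000;
  auto start = std::chrono::steady_clock::now ();

  for (int i = 0; i < iterations; ++i)
    knowledge.evaluate (increment, settings);

  auto end = std::chrono::steady_clock::now ();
  auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds> (
    end - start).count ();

  std::cout << total_ns / iterations
    << " ns per ++var evaluation" << std::endl;
}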

What does this mean to you as a developer?
It means you can develop C++ applications that link to our library and evaluate knowledge operations at around 6 MHz before disseminating your knowledge updates across the network, in microseconds, using DDS or whatever transport you want. It means that knowledge and reasoning can be included in online, mission-critical real-time systems, and you no longer have to use reasoning engines that take milliseconds to evaluate rules, limiting you to Hz rather than the kHz or MHz we can reach.

This engine was already pushing the state-of-the-art speeds for knowledge evaluation in real-time systems before these changes, but we also have plans for hopefully blowing this out of the water by using templates instead of virtual functions in our current expression tree formation. This will require either moving to the boost::spirit template metaprogramming parser approach or rolling our own. I'll keep you posted. Right now, updates to CID and KATS for automated, adaptive deployments are taking priority.

Tuesday, January 10, 2012

New Features in MADARA KaRL

The MADARA Knowledge and Reasoning Language (KaRL) has undergone some major changes recently that should provide developers with a faster, more flexible reasoning engine. In this post, we'll outline features such as explicit compilation of KaRL logics, implicit compilation of variable references, and the timed wait operation. Along the way, we'll show how to use the atomic pre- and post-prints for evaluations or wait statements.

Originally, the KaRL engine created an expression tree and then cached the expression tree in an STL map from strings to expression trees. This feature still exists, but we noticed that the string lookups were taking quite a bit of time. In the worst case, such string lookups can take O(m log n), where m is the length of the string and n is the number of compiled logics. This is quite a long time to grab a cached tree.

The same search complexity was limiting the execution of our KaRL interpreter logic as well. With each variable lookup, we perform a lookup in an STL map from strings to long longs. Depending on the length of the variable name and the number of variables, this could again take a while.

Not anymore.

Developers may now compile KaRL logics directly with a call to the compile function, the result of which can be used to directly reference the expression tree. Additionally, underneath the hood, we have rewritten the variable node in the expression tree so that it directly manipulates the underlying Knowledge Record in the Thread Safe Context (and does so without entering or leaving the mutex). This resulted in increasing the speed of the engine by 3-4x, depending on how the logics were being processed. Keep in mind that this speed-up was achieved on an already state-of-the-art reasoner that was capable of 2 million knowledge operations per second (~500 ns per operation).
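
The core of the variable rewrite is easiest to see in miniature. The sketch below is hypothetical--the class name, the members, and the use of a bare long long in place of the real Knowledge Record are all my simplifications, not the MADARA source--but it captures the idea: resolve the variable once when the expression tree is built, then touch it through a cached pointer on every later evaluation instead of repeating the O(m log n) map lookup.

#include <map>
#include <string>

// hypothetical illustration of the caching idea, not the MADARA source
class Variable_Node_Sketch
{
public:
  // resolve the variable name exactly once, when the expression tree is built;
  // pointers into a std::map stay valid as other entries come and go
  Variable_Node_Sketch (const std::string & name,
    std::map <std::string, long long> & context)
    : record_ (&context[name])
  {
  }

  // every later evaluation is a pointer dereference, not a string map lookup
  long long increment (void)
  {
    return ++(*record_);
  }

private:
  long long * record_;  // cached pointer into the thread safe context
};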

When using a C++ for loop to call the reasoning engine, these changes improved our performance from ~1 us per operation to ~250 ns. Larger logics, where internal optimizations are possible, have been improved from ~500 ns per operation to ~190 ns. This means that the KaRL engine can now process knowledge operations at over 5 MHz--5 million operations per second.

The implicit compilation is included in all knowledge calls, but the explicit compilation can be done via the following:


// Initiate knowledge base with no transport
Madara::Knowledge_Engine::Knowledge_Base knowledge;

// new classes for evaluation settings and compiled expressions
Madara::Knowledge_Engine::Eval_Settings settings;
Madara::Knowledge_Engine::Compiled_Expression compiled;

// compile the expression and save it into compiled
compiled = knowledge.compile ("invariant => (++.count ; someother.condition => status = 5)");

// evaluate the expression with the default settings
knowledge.evaluate (compiled, settings);



You can see other examples of using these new features in the test for reasoning throughput.

We've also added the ability to do timed waits instead of indefinite blocking waits on knowledge expressions. This allows a calling C++ program to wait for a specific time interval for the knowledge expression or KaRL logic to become non-zero and, if the time interval passes, to have control returned to the caller. The underlying mechanisms are the same as for the indefinite wait: the KaRL engine aggregates any changes to variables within the logic evaluation and sends updates to other interested network entities over the DDS transport.

You can find examples of how to use this in the timed wait tests. I include an example below:


// Initiate knowledge base with no transport
Madara::Knowledge_Engine::Knowledge_Base knowledge;

// new classes for wait settings and compiled expressions
Madara::Knowledge_Engine::Compiled_Expression compiled;
Madara::Knowledge_Engine::Wait_Settings wait_settings;

// simple expression that will always evaluate to zero
std::string logic = "++.count && 0";

// set the wait settings to a polling frequency of once
// a millisecond and a maximum wait time of 10 seconds
wait_settings.poll_frequency = .001;
wait_settings.max_wait_time = 10.0;

// create atomic pre and post print statements
wait_settings.pre_print_statement =
  "WAIT STARTED: Waiting for 10 seconds.\n";
wait_settings.post_print_statement =
  "WAIT ENDED: Number of executed waits was {.count}.\n";

// compile the simple zero logic
compiled = knowledge.compile (logic);

// wait on the expression with the timed wait semantics
knowledge.wait (compiled, wait_settings);


The implications of the time-based waiting mechanism are pretty big, and these changes will eventually make their way into the KATS framework to allow for even more flexibility with automated tests and deployments, in the form of executing fail and success conditions for deployment elements. Combined with the new redeployment framework changes, the MADARA suite of tools should help a lot of distributed, real-time and embedded developers better reach their project goals. If you have any questions or comments about the implementations of these features or how you can use them in your projects, please let me know. MADARA is completely open source under a BSD license.

Friday, August 12, 2011

Android Performance Testing

1. Intro

I'm building a distributed testing infrastructure on top of KATS for a DARPA project, and we needed to monitor CPU, memory, and process profiling information (including context switches) throughout a test run. MADARA already has a library called MAML which allows for quick Python script development to instrument a phone via the Android Monkeyrunner tool, but Monkeyrunner doesn't really provide for performance profiling. So, how do you quickly and easily retrieve a summary of CPU and memory utilization on your Android phone?

2. Solution

The information is available through several utilities in the Android Debug Bridge, including top and the varied information stored inside the /proc directory. What I've done is make these more accessible through additions to the open-source maml.py library and the new maal.py (MADARA Android ADB Library), which does not require Monkeyrunner at all.

The MAAL provides much of the same functionality that MAML does, but is much slower with keyevents (I will fix this by reusing the same shell session, but it isn't a priority right now). MAAL and MAML also have a new library function called print_device_stats which allows for printing both a long form and a one line summary for CPU and memory usage.

Three scripts have also been added to utilize these libraries and provide general-purpose testing information for Android smartphone programmers. For example, the following is a detailed view of the current memory and CPU usage on a Motorola Droid in our lab:


2.1. maal_monitor.py

The command line arguments for maal_monitor.py are available by passing -h or --help to the script. The following script execution monitors performance for 1 iteration (-n 1) and prints the top 10 CPU-intensive processes running on the phone.


./maal_monitor.py -p 10 -n 1
Memory: 5892 kB free of 230852 kB
User 4%, System 6%, IOW 0%, IRQ 0%
User 15 + Nice 0 + Sys 21 + Idle 273 + IOW 0 + IRQ 0 + SIRQ 0 = 309

  PID CPU% S  #THR     VSS     RSS PCY UID      Name
 1021   4% S    57 215056K  56096K  fg system   system_server
30994   3% S    20 140436K  24940K  bg app_24   edu.vu.isis.ammo.spotreport
23827   2% R     1    876K    392K  fg shell    top
  177   0% S     1      0K      0K  fg root     omap2_mcspi
    5   0% S     1      0K      0K  fg root     events/0
  995   0% S     2   1272K    128K  fg compass  /system/bin/akmd2
 1053   0% S     1      0K      0K  fg root     tiwlan_wq
  160   0% S     1      0K      0K  fg root     cqueue
  180   0% S     1      0K      0K  fg root     cpcap_irq/0
  238   0% S     1      0K      0K  fg root     ksuspend_usbd


The summarized view looks like this for maal_monitor.py:


./maal_monitor.py -p 10 -n 1 -s
Memory: 5952 kB free of 230852 kB. CPU: Total 6% (User: 2% Sys: 4%)


maal_monitor.py is great for taking periodic measurements, but this may be too coarse-grained and too inaccurate with regard to CPU utilization for your testing needs (with maal_monitor.py, we're essentially polling every five seconds for current utilization, which is approximated). If you need hard numbers for CPU usage, context switches, number of processes launched, etc., I provide a separate set of scripts.



2.2. maal_proc_stats.py and maal_stats_cmp.py

These scripts are generally used in the following way: 1) call maal_proc_stats.py with an outfile location on your storage drive, 2) run your test, 3) call maal_proc_stats.py with an outfile location that is different from #1, and 4) call maal_stats_cmp.py on the two files created in #1 and #3 to get the performance difference for your test.

The output for maal_proc_stats.py will look like the following:


./maal_proc_stats.py -s
Clockticks: User 5451112. System 5357590. IO 566. 
Processes: Num 56061. Switches 812051296.


The first line shows the clock ticks spent in user processes, system processes and dispatching IO (it does not count idle ticks). The second line shows the number of processes that have been launched since phone boot, and the number of context switches since boot.
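
If you are curious where those numbers come from, they are straight out of /proc/stat on the device. The actual MAAL script is Python and reads the phone through adb, but here is a small C++ sketch (my own illustration, not part of MAAL) of parsing the same fields--the cpu line's user/system/iowait jiffies, the processes count, and the ctxt count:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main (void)
{
  std::ifstream stat ("/proc/stat");
  std::string line;
  unsigned long long user = 0, nice = 0, system = 0, idle = 0, iowait = 0;
  unsigned long long processes = 0, switches = 0;

  while (std::getline (stat, line))
  {
    std::istringstream fields (line);
    std::string key;
    fields >> key;

    if (key == "cpu")             // aggregate jiffies across all cores
      fields >> user >> nice >> system >> idle >> iowait;
    else if (key == "processes")  // processes launched since boot
      fields >> processes;
    else if (key == "ctxt")       // context switches since boot
      fields >> switches;
  }

  std::cout << "Clockticks: User " << user << ". System " << system
    << ". IO " << iowait << "." << std::endl;
  std::cout << "Processes: Num " << processes
    << ". Switches " << switches << "." << std::endl;

  return 0;
}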

Now, after creating two files with this script according to the process above noted in #1 and #3, you can process the performance information that changed during the test by running the following:


./maal_stats_cmp.py --infile1 first.stats --infile2 second.stats
Clockticks: User 629. System 393. IO 0. 
Processes: Num 23. Switches 35566.


Not only can you use maal_stats_cmp.py on the output from maal_proc_stats.py, but you can also process the difference between copies of the /proc/stat file (you just need to adb pull these files to your computer first, if that's what you would like to do).

With these numbers in place, you should be able to configure systems like BuildBot or your favorite scoreboarding system to display test results via threshold values based on the resource usage of known-good test runs. If changes have caused huge CPU or memory spikes, they should show up in one or both of these MAAL performance logging methodologies.

3. Downloads

MAAL and associated scripts

MAML and any open-source scripts
MADARA KATS for synchronizing and coordinating testing processes

Wednesday, June 1, 2011

Android Monkeyrunner and the Google ADB: a lament

Intro

So, for the past couple of months, I've been trying to get Android Monkeyrunner to cooperate for distributed automated testing, but it has been an uphill battle... against an entrenched army of monkeys armed with bazookas. I wanted the Monkeyrunner library to work well, but I get the feeling that Monkeyrunner has not been tested or used much.

The honeymoon

My experience with Monkeyrunner a month or two ago didn't start out all bad. The Monkeyrunner press function works much faster than doing "adb shell input keyevent" calls (likely because the latter launches a new shell with every invocation and offers no option to chain together a long string of keyevents in the same session), and I got a glimpse of how easy smart phone automation could be without writing customized Java unit tests or installing Robotium on the phone. I could just send KeyEvents to the phone, type a string, and even connect 2 to 8 phones to our servers and launch Monkeyrunner tests in parallel (more on problems with this later). With Monkeyrunner, I could instruct a non-technical person on how to write a test based simply on how they would use a directional pad and keyboard to navigate around the activity.

Aaaaaand we don't even cuddle anymore

The first problem with Monkeyrunner for me came in the form of the type function being broken when the space key is used. This is not unique to Monkeyrunner. It appears that adb shell input text suffers from a similar problem. There may be several other KeyEvents (besides spaces) that fall into this particular hazard, but I was able to get around the issue for now by removing spaces from the text to be sent and inserting KEYCODE_SPACE where appropriate.

There were a couple of other problems with Monkeyrunner that kept cropping up. First, there is very little support for debugging the state of the activity you are trying to instrument.

You can't even get information on whether or not the activity has crashed without going back to adb and logcat. You can't form KeyEvent pairings that select an entire EditText without long clicks, but long clicks are hard to emulate when the EditText could be in a different location on the screen due to portrait or landscape modes, or even because the screen resolution is different between two phones.

You can't press two buttons at once because the DOWN type in the press method is apparently mapped directly to DOWN_AND_UP. Basically, the shift key is released immediately after the press function returns, regardless of what you pass it. This caused some headaches when trying to select all text, but it was manageable. No automation-killing problem found yet... until Tuesday...

Monkeyrunner is a racist... that's a software library that causes race conditions, right?

On Tuesday came the worst problem, which drove me to try to rewrite my Monkeyrunner-based library without modifying the Android Debug Bridge. There is a race condition in the Monkeyrunner WaitForConnection method that occurs when you try to wait for multiple phones at once (even from separate heavyweight processes). The only way to really witness this issue is when you have an automated system trying to launch activities on 2 to 8 phones at once (humans take milliseconds or seconds to launch each by hand, so the race condition is hard for a manual tester to catch). The WaitForConnection method will cause random behavior on one of the phones while opening the other one without a problem, for a moment. Then the automation on all phones halts. The issue is very weird.

We got around this with a short-term fix by ensuring that we always waited 1 second after the previous phone launched before starting the next phone's automation (via the KATS process lifecycle). While this works, it is not ideal. We wanted to launch 2-8 phones at once per server (as many USB connections as we can do right now) and see if there were any race conditions involving the phones connecting or disseminating to the server. With this race condition in Monkeyrunner and our subsequent fix of sleeping in between each phone launch, it's likely that the phones will have 1 second of difference between sending, which means we can't test everything that we want to test.

What's most frustrating about this is that the problem is not on my end, and I can't seem to find any fix to this without modifying the Android code base.

Monkeyrunner withdrawal

To try to address the issue, I rewrote my entire Python scripting library, which wrapped Monkeyrunner, to instead use nothing but ADB under the hood. It started out promising. First, the adb equivalent of WaitForConnection was much, much faster (basically, I just used adb get-state). The WaitForConnection method must be establishing an actual session with the phone, and this is probably where the race condition is occurring (during the session creation, which is almost certainly not thread safe). So far, so good. Actually, the entire library was a breeze to write.

Then I ran it... and the adb shell input keyevent command inserts those 1 second delays in between every KEYCODE_DPAD_LEFT, backspace, menu, etc. A 15 second Monkeyrunner test is extended to hundreds of seconds when using adb shell input keyevent. The culprit with the adb shell is probably that a separate shell session is started with each invocation--rather than queuing the events to the target phone and returning immediately. I can understand this not being the default behavior, but I can't really understand why an asynchronous or queuing version isn't available.

A lamentable conclusion

Being able to send KeyEvents to an Android phone is pretty awesome. I hope that the Google folks either fix the race conditions in Monkeyrunner or fix the delays in adb shell so we can send KeyEvents at decent speeds. For the moment though, this is my Monkeyrunner sad face :(

Library files

The wrappers around the Monkeyrunner and ADB interfaces are linked below. The library is called the MADARA Android Monkeyrunner Library (MAML).

MAML sans Monkeyrunner
MAML original