Tuesday, January 17, 2012

Performance Increase in MADARA KaRL

The new built-in features in KaRL have resulted in very noticeable performance increases. Below are the changes in performance. These are timing metrics reported by the test_reasoning_throughput test, which is available in the source code repo. The test was run on an Intel Core Duo with 4 GB RAM, though it only uses ~330 KB of memory.

BEFORE indicates timing values before two changes: compiled expressions, which circumvent the std::map lookups for KaRL logics, and the built-in changes to variable indexing, which had been incurring the same type of std::map overhead on each variable lookup. AFTER indicates timing values after the constant-time built-in variable lookups were implemented. AFTER WITH COMPILED indicates the timing with both changes in place.
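To make the variable-indexing change concrete, here is a minimal sketch of the general technique (this is illustrative, not MADARA's actual internals): resolve a variable name to an integer index once, up front, then use that index for every later access, so the hot path does an O(1) vector access instead of an O(log n) std::map lookup with string comparisons.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical variable table: the map is consulted only when a name is
// first resolved ("compiled"); evaluation uses the returned index.
class VariableTable
{
public:
  // Resolve (or create) a variable, paying the std::map cost once.
  size_t compile (const std::string & name)
  {
    auto found = indices_.find (name);
    if (found != indices_.end ())
      return found->second;

    size_t index = values_.size ();
    indices_[name] = index;
    values_.push_back (0);
    return index;
  }

  // Hot path: constant-time access by precomputed index.
  int64_t & at (size_t index)
  {
    return values_[index];
  }

private:
  std::map<std::string, size_t> indices_;  // paid at compile time only
  std::vector<int64_t> values_;            // paid at evaluation time
};
```

A loop like `for(1->10,000) ++var` then touches the map zero times: it calls `compile("var")` once and increments `at(index)` ten thousand times.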

Execution times
BEFORE
for(1->10,000) ++var             997 ns
++var; x 10,000                  502 ns
for(1->10,000) true => ++var     1000 ns
true => ++var; x 10,000          597 ns
AFTER
for(1->10,000) ++var             642 ns
++var; x 10,000                  248 ns
for(1->10,000) true => ++var     637 ns
true => ++var; x 10,000          357 ns
AFTER WITH COMPILED
for(1->10,000) ++var             266 ns
++var; x 10,000                  169 ns
for(1->10,000) true => ++var     269 ns
true => ++var; x 10,000          196 ns
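For readers who want to reproduce this kind of table, the per-operation numbers above can be gathered with a simple harness like the following sketch (this is not the actual test_reasoning_throughput code): run the operation many times and divide the elapsed wall time by the iteration count.

```cpp
#include <chrono>
#include <cstdint>

// Run op() `iterations` times and return the average cost per call in
// nanoseconds. A steady (monotonic) clock avoids wall-clock adjustments.
template <typename Operation>
double average_ns (Operation op, int64_t iterations)
{
  auto start = std::chrono::steady_clock::now ();

  for (int64_t i = 0; i < iterations; ++i)
    op ();

  auto stop = std::chrono::steady_clock::now ();

  std::chrono::duration<double, std::nano> elapsed = stop - start;
  return elapsed.count () / static_cast<double> (iterations);
}
```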

What does this mean to you as a developer?
It means you can develop C++ applications that link to our library and evaluate knowledge operations at around 6 MHz, then disseminate your knowledge updates across the network in microseconds using DDS or whatever transport you want. It means that knowledge and reasoning can be included in online, mission-critical real-time systems: you no longer have to use reasoning engines that take milliseconds to evaluate rules, limiting you to Hz rather than kHz or, in our case, MHz.
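The ~6 MHz figure follows directly from the fastest time in the table: at 169 ns per evaluation, 10^9 ns / 169 ns is roughly 5.9 million evaluations per second.

```cpp
// Back-of-the-envelope check of the ~6 MHz claim: convert a per-operation
// cost in nanoseconds into operations per second.
constexpr double evaluations_per_second (double ns_per_evaluation)
{
  return 1.0e9 / ns_per_evaluation;
}

// evaluations_per_second (169.0) is about 5.92 million, i.e. ~5.9 MHz.
```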

This engine was already pushing state-of-the-art speeds for knowledge evaluation in real-time systems before these changes, but we also have plans for hopefully blowing this out of the water by using templates instead of virtual functions in our current expression tree. This will require either moving to the boost::spirit template metaprogramming approach to lexing and parsing or rolling our own. I'll keep you posted. Right now, updates to CID and KATS for automated, adaptive deployments are taking priority.
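To illustrate the planned templates-versus-virtuals change, here is a hedged sketch of the two styles of expression node (again, not MADARA's actual tree): with virtual dispatch, every node costs an indirect call per evaluation; with templates, the shape of the expression is part of the type, so the compiler can inline the entire tree.

```cpp
#include <cstdint>

// Style 1: virtual dispatch. Each evaluate() is an indirect call that the
// compiler generally cannot inline across the tree.
struct Node
{
  virtual ~Node () = default;
  virtual int64_t evaluate () const = 0;
};

struct Literal : Node
{
  int64_t value;
  explicit Literal (int64_t v) : value (v) {}
  int64_t evaluate () const override { return value; }
};

// Style 2: templates. The tree's structure is encoded in the type
// Add<Value, Value>, so evaluate() typically inlines to a single add.
struct Value
{
  int64_t value;
  int64_t evaluate () const { return value; }
};

template <typename Left, typename Right>
struct Add
{
  Left left;
  Right right;
  int64_t evaluate () const { return left.evaluate () + right.evaluate (); }
};
```

The trade-off is that the template style needs the expression's structure at compile time, which is why it pairs naturally with a boost::spirit-style metaprogramming parser or a hand-rolled equivalent.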
