Saturday, September 24, 2011

New Process Synchronization Mechanism in the Simulation Collection

I have added a new process synchronization / communication mechanism to the simulation collection - similar to the Ada rendezvous mechanism. This adds call and accept syntaxes that allow processes to communicate in a synchronized manner. The call syntax allows the caller process to send a request to the callee process. The call syntax has the form:

  (call callee (id . arguments))

Where callee is the receiving process, id is the request type (which is not evaluated), and arguments are the arguments for the request. With a simple call, the caller is blocked until the callee accepts the request.

The accept syntax allows the callee process to accept a request from a caller process. The accept syntax has the form:


  (accept caller (id . parameters) . body)

Where caller is the calling process, id is the request type (which is not evaluated), parameters are the formal parameters for the request, and body is the body of the accept - i.e., the critical section. With a simple accept, the callee is blocked until a caller sends a matching request. When a matching request is accepted, a rendezvous occurs - that is, the body is evaluated and its result is returned to the caller as the result of the matching call.


Each process maintains a queue of requests that have yet to be accepted; each such caller remains blocked until its request is accepted.


Even with simple call / accept we can implement a simple lock process.


(define-process (lock)
  (let loop ()
    (accept caller (lock))
    (accept caller (unlock))
    (loop)))

This process will continuously loop waiting for a process to request a lock and then wait for a process to request an unlock. Here is an example process that uses the lock.


(define-process (p1 i a-lock)
  (printf "~a: process p1(~a) started.~n"
          (current-simulation-time) i)
  (call a-lock (lock))
  (printf "~a: process p1(~a) acquired lock.~n"
          (current-simulation-time) i)
  (wait (random-flat 0.0 10.0))
  (printf "~a: process p1(~a) releasing lock.~n"
          (current-simulation-time) i)
  (call a-lock (unlock)))



This process just requests a lock - blocking until it gets it, waits a random length of time to simulate some protected task, and requests an unlock.


And here is a main procedure to start everything.


(define (main n)
  (with-new-simulation-environment
    (let ((a-lock (schedule #:now (lock))))
      (for ((i (in-range n)))
        (schedule #:at (random-flat 0.0 10.0) (p1 i a-lock)))
      (start-simulation))))

If you run this with:


(main 10)


You get the following output:


0.13863292728449428: process p1(5) started.
0.13863292728449428: process p1(5) acquired lock.
1.1536616785362432: process p1(7) started.
2.0751959904191937: process p1(8) started.
2.876463473845367: process p1(1) started.
3.344929657351545: process p1(4) started.
7.029086638253653: process p1(5) releasing lock.
7.029086638253653: process p1(7) acquired lock.
7.342236104231587: process p1(2) started.
7.824845469456133: process p1(9) started.
7.921942925957062: process p1(6) started.
8.162050798467028: process p1(3) started.
8.574025375628212: process p1(0) started.
8.624836102585753: process p1(7) releasing lock.
8.624836102585753: process p1(8) acquired lock.
16.152957766273808: process p1(8) releasing lock.
16.152957766273808: process p1(1) acquired lock.
18.243251499858758: process p1(1) releasing lock.
18.243251499858758: process p1(4) acquired lock.
22.293434873464157: process p1(4) releasing lock.
22.293434873464157: process p1(2) acquired lock.
27.693559748646905: process p1(2) releasing lock.
27.693559748646905: process p1(9) acquired lock.
32.4025230155617: process p1(9) releasing lock.
32.4025230155617: process p1(6) acquired lock.
39.54837751017477: process p1(6) releasing lock.
39.54837751017477: process p1(3) acquired lock.
39.86720234676685: process p1(3) releasing lock.
39.86720234676685: process p1(0) acquired lock.
44.316970186291684: process p1(0) releasing lock.

However, this is a (very) fragile implementation of a lock relying on well-behaved callers.


A more robust implementation of a lock requires greater control over the acceptance of requests. The select syntax - in this case for accepts - plus some internal state variables allows us to do this. The select (for accepts) syntax has the following form.


  (select
    accept-alternative
    ...
    [else-alternative])

  accept-alternative
    = ((when expr
         (accept caller (id . parameters) . body1))
       . body2)
    | ((accept caller (id . parameters) . body1)
       . body2)

  else-alternative
    = (else timing . body)

  timing
    = #:now
    | #:at time
    | #:in delta
    | #:when event


Which looks a bit daunting, but basically it is a list of possibly guarded accepts with an optional else. The select will evaluate all of the guards - the when exprs - to determine the open accepts. Unguarded accepts are always open. If any requests for an open accept are queued, the highest priority request is selected and a rendezvous occurs. The body2 expressions are evaluated after the rendezvous. The else alternative specifies a time-out and is evaluated (in lieu of a rendezvous) if no request is accepted before the specified time.
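To make the shape concrete, here is a sketch of a bounded buffer of capacity one written with select. The request ids put and get, and the idle message, are illustrative - they are not part of the simulation collection.

(define-process (buffer)
  (let ((item #f))
    (let loop ()
      (select
        ;; Only accept a put when the buffer is empty.
        ((when (not item)
           (accept caller (put x)
             (set! item x))))
        ;; Only accept a get when the buffer is full; body2 clears
        ;; the slot after the rendezvous returns the item.
        ((when item
           (accept caller (get)
             item))
         (set! item #f))
        ;; Time out if no request arrives within 100 time units.
        (else #:in 100.0
          (printf "~a: buffer idle~n" (current-simulation-time))))
      (loop))))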


Here is a better implementation of a lock process.


(define-process (lock)
  (let ((process #f)
        (locked? #f))
    (let loop ()
      (select
        ((when (not locked?)
           (accept caller (lock)
             (set! process caller)))
         (set! locked? #t))
        ((accept caller (unlock)
           (unless (eq? caller process)
             (error 'unlock
                    "process does not have the lock"
                    caller)))
         (set! process #f)
         (set! locked? #f)))
      (loop))))


This uses two variables, process and locked?, to maintain the internal state of the lock. The lock process continuously loops processing lock or unlock requests. A lock request will only be accepted if the lock is not currently locked. An unlock request will be accepted any time, but will raise an error if the process requesting the unlock is not the process that has the lock.


Running this lock process with the process p1 and main above gives identical results.
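To see the added robustness, consider a process that requests an unlock without ever holding the lock. With the guarded implementation above, this fails during the rendezvous rather than silently corrupting the lock state. [p2 is a hypothetical process name for illustration.]

(define-process (p2 a-lock)
  ;; This process never acquired the lock, so its unlock request
  ;; fails the (eq? caller process) check and raises an error.
  (call a-lock (unlock)))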


In some cases, we don't know if we can fully process a request until we get the request - because it depends on the caller or arguments in the request. What if we wanted to extend the lock to allow nested lock/unlock pairs from a process? In that case, we want to accept lock requests from the process owning the lock at any time, but not from others. There isn't any way to do that with a guard. Instead, we use requeue.


(define-process (lock)
  (let ((process #f)
        (count 0))
    (let loop ()
      (select
        ((accept caller (lock)
           (if process
               (if (eq? caller process)
                   (set! count (+ count 1))
                   (requeue))
               (begin
                 (set! process caller)
                 (set! count 1)))))
        ((accept caller (unlock)
           (if (eq? caller process)
               (begin
                 (set! count (- count 1))
                 (when (= count 0)
                   (set! process #f)))
               (error 'unlock
                      "process does not have the lock"
                      caller)))))
      (loop))))

Here we have a count variable instead of a simple Boolean locked? variable - when count = 0 the lock is open to anyone, but when count > 0 it is only open to the process currently 'owning' the lock. In this case, we remove the guard from the accept for lock and requeue the request if the caller is not the process currently owning the lock.
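With this version, a process that already owns the lock can lock it again. Here is a sketch of the nested usage - p3 is a hypothetical process name, and the comments trace the count variable.

(define-process (p3 a-lock)
  (call a-lock (lock))    ; count = 1
  (call a-lock (lock))    ; same caller, so count = 2
  (call a-lock (unlock))  ; count = 1, lock still held
  (call a-lock (unlock))) ; count = 0, lock released

While p3 holds the lock, lock requests from other processes are requeued until the count returns to zero.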


I believe this is a very robust interprocess synchronization / communications mechanism for the simulation collection. I have a complete knowledge-based simulation example I will be putting up in the near future.


This code (and many examples) is included in the development branch of the simulation collection on the Schematics project at SourceForge. I will update the PLaneT package when I update the documentation.


Saturday, August 14, 2010

Simple Simulation in Racket

I updated the simple simulation example from the simulation collection to Racket. The simple simulation example shows how a basic discrete-event simulation engine can be written in pure Racket (specifically, the racket/base language). Fully commented it is only about two and a half pages of code and does not contain any references to external packages. If you're curious about the usage of continuations, this might be a good example to look at. [If you're an expert in continuations, any critique would be welcome - particularly on the event loop in the start-simulation procedure.]

The code uses parameters for its global variables, which may be unfamiliar to some people. For example, the simulation clock is maintained by the current-time parameter. This parameter is created by (define current-time (make-parameter 0.0)). A call to (current-time) will return its current value. The call (current-time (event-time (current-event))) in start-simulation sets its value to the time of the next event. Finally, the parameterize call in run-simulation creates new instances of the parameters and (I think) is cleaner than just resetting their values.
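For readers unfamiliar with parameters, here is a minimal sketch of the behaviors used in the code below.

#lang racket/base

;; A parameter is created with an initial value.
(define current-time (make-parameter 0.0))

(current-time)       ; read the value -> 0.0
(current-time 12.5)  ; set the value to 12.5

;; parameterize creates a fresh binding for its dynamic extent.
(parameterize ((current-time 0.0))
  (current-time))    ; -> 0.0 inside the parameterize

(current-time)       ; -> 12.5 again outside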

And, here is the actual code.


#lang racket/base
;;; Simplified Simulation System

;;; Global simulation control variables

;;; future-event-list : (parameter/c (list? event?))
;;; current-time : (parameter/c real?)
;;; current-event : (parameter/c (or/c false/c event?))
;;; event-loop-exit : (parameter/c (or/c false/c continuation?))
;;; event-loop-next : (parameter/c (or/c false/c continuation?))
(define future-event-list (make-parameter '()))
(define current-time (make-parameter 0.0))
(define current-event (make-parameter #f))
(define event-loop-exit (make-parameter #f))
(define event-loop-next (make-parameter #f))

;;; Event definition and scheduling

;;; (struct event (time function arguments))
;;; time : (>=/c 0.0)
;;; function : (or/c false/c procedure?)
;;; arguments : list?
;;; Each event has a time the event is to be executed, the function to
;;; be executed, and the (evaluated) arguments to the function.
(struct event (time function arguments))

;;; (schedule event) -> any
;;; event : event?
;;; Add an event to the event list.
(define (schedule event)
  (future-event-list (event-schedule event (future-event-list))))

;;; (event-schedule event event-list) -> (list-of event?)
;;; event : event?
;;; event-list : (list-of event?)
;;; Return a new list of events corresponding to the given event added
;;; to the given list of events.
(define (event-schedule event event-list)
  (cond ((null? event-list)
         (list event))
        ((< (event-time event)
            (event-time (car event-list)))
         (cons event event-list))
        (else
         (cons (car event-list)
               (event-schedule event (cdr event-list))))))

;;; Simulation control routines

;;; (wait/work delay) -> any
;;; delay : (>=/c 0.0)
;;; Simulate the delay while work is being done. Add an event to
;;; execute the current continuation to the event list.
(define (wait/work delay)
  (let/cc continue
    ;; Add an event to execute the current continuation
    (schedule (event (+ (current-time) delay) continue '()))
    ;; Return to the main loop
    ((event-loop-next))))

;;; (start-simulation) -> any
;;; This is the main simulation loop. As long as there are events to
;;; be executed (or until the simulation is explicitly stopped), remove
;;; the next event from the event list, advance the clock to the time
;;; of the event, and apply the event's functions to its arguments.
(define (start-simulation)
  (let/ec exit
    ;; Save the event loop exit continuation
    (event-loop-exit exit)
    ;; Event loop
    (let loop ()
      ;; Exit if no more events
      (when (null? (future-event-list))
        ((event-loop-exit)))
      (let/cc next
        ;; Save the event loop next continuation
        (event-loop-next next)
        ;; Execute the next event
        (current-event (car (future-event-list)))
        (future-event-list (cdr (future-event-list)))
        (current-time (event-time (current-event)))
        (apply (event-function (current-event))
               (event-arguments (current-event))))
      (loop))))

;;; (stop-simulation) -> any
;;; Stop the execution of the current simulation (by jumping to its
;;; exit continuation).
(define (stop-simulation)
  ((event-loop-exit)))

;;; Random Distributions (to remove external dependencies)

;;; (random-flat a b) -> inexact-real?
;;; a : real?
;;; b : real?
;;; Returns a random real number from a uniform distribution between a
;;; and b.
(define (random-flat a b)
  (+ a (* (random) (- b a))))

;;; (random-exponential mu) -> inexact-real?
;;; mu : real?
;;; Returns a random real number from an exponential distribution with
;;; mean mu.
(define (random-exponential mu)
  (* (- mu) (log (random))))

;;; Example Simulation Model

;;; (generator n) -> any
;;; n : exact-positive-integer?
;;; Process to generate n customers arriving into the system.
(define (generator n)
  (do ((i 0 (+ i 1)))
      ((= i n) (void))
    (wait/work (random-exponential 4.0))
    (schedule (event (current-time) customer (list i)))))

;;; (customer i) -> any
;;; i : exact-nonnegative-integer?
;;; The ith customer into the system. The customer is in the system
;;; 2 to 10 minutes and then leaves.
(define (customer i)
  (printf "~a: customer ~a enters~n" (current-time) i)
  (wait/work (random-flat 2.0 10.0))
  (printf "~a: customer ~a leaves~n" (current-time) i))

;;; (run-simulation n) -> any
;;; n : exact-positive-integer?
;;; Run the simulation for n customers (or until explicitly stopped at
;;; some specified time).
(define (run-simulation n)
  ;; Create new global values
  (parameterize ((future-event-list '())
                 (current-time 0.0)
                 (current-event #f)
                 (event-loop-exit #f)
                 (event-loop-next #f))
    ;; Schedule the customer generator
    (schedule (event 0.0 generator (list n)))
    ;; Stop the simulation at the specified time (optional)
    ;(schedule (event 50.0 stop-simulation '()))
    ;; Start the simulation main loop
    (start-simulation)))

;;; Run the simulation for 10 customers.
(run-simulation 10)


An example of the output looks like:


2.3985738870908615: customer 0 enters
5.395876334125138: customer 1 enters
7.716207787604494: customer 2 enters
8.062394508850671: customer 1 leaves
9.204092461998275: customer 0 leaves
14.431739507072376: customer 3 enters
15.15611565215575: customer 2 leaves
19.107487138771972: customer 4 enters
19.67563344212178: customer 3 leaves
26.51109574171283: customer 5 enters
27.060305975366514: customer 4 leaves
31.04777263526701: customer 6 enters
32.83444054122948: customer 5 leaves
36.31758748274069: customer 7 enters
39.27687483880874: customer 6 leaves
40.643350655011744: customer 8 enters
41.76876758081738: customer 7 leaves
42.82235216823591: customer 8 leaves
47.63479457664656: customer 9 enters
57.31661407095231: customer 9 leaves


Your output will vary slightly because of the random numbers. That is, this is a stochastic model.

Friday, August 13, 2010

Status Update

I haven't posted for a while, so I just wanted to provide an update on the status and (hopeful) direction of my various PLaneT packages. This post will cover the science, simulation, and inference collections, which work together to provide a knowledge-based simulation capability.

We at SET Corporation are using the science, simulation, and inference collections as part of our Multi-Agent, Dynamic NEtwork Simulation System (MADNESS). The inference collection is also used in our COntent GENeration from Templates (COGENT) system. These are used on a daily basis in our agent-based simulation work.

Science Collection

The current release of the science collection is version 3.10. I only expect to do bug fixes to the version 3 code. The next release should be version 4.0. For version 4.0 I plan to convert the code to Typed Racket, which has gotten to the point that typed code generally runs as fast or faster than my hand-optimized numeric code.

The only major new functionality planned for version 4.0 is fast Fourier transforms. This code is already written and ready for release, but the documentation has not been updated. Vincent St-Amour at Northeastern University converted the FFT code to Typed Racket to test the optimized code generation for numeric processing. The results were extremely impressive and I need to make a separate post on this.

Simulation Collection

The current version of the simulation collection is version 3.4. I only expect to do bug fixes to the version 3 code. At some point I will release a version 4.0, which may (or may not) be in Typed Racket. The decision to use Typed Racket will be based on my experience in converting the science collection. I will also rewrite the simulation language using syntax-parse to provide better syntax error handling.

Inference Collection

The current version of the inference collection is version 2.7. There was a flurry of releases earlier this year as we really began to use the inference collection in MADNESS and COGENT. Unfortunately, the documentation has not yet caught up with the code changes.

I made earlier posts on the upcoming version 3 rewrite. As with the simulation collection, this may (or may not) be in Typed Scheme. I will definitely use syntax-parse to implement a rule compiler for the rule language.

Saturday, September 26, 2009

Getting Ready for Inference Collection V3.0 Update 1

One of the major changes for version 3.0 of the inference collection is a rule compiler that generates code for (portions of) the match/join process. This will be done at expansion (compile) time by the rule language macros themselves. The rule language macros include define-ruleset, define-facts, and define-rule. The rule compilation itself takes place in the define-rule macro.

So far, I have the rule parsing, rule validation, and rule normalization phases pretty much complete. The rule normalization phase converts Boolean expressions within the pattern clauses into a standard form: (and <pattern-clause> ...) | (or (and <pattern-clause> ...) ...). The first form is a simple rule while the second is a sequence of simple rules. For example:

(define-rule (some-rule some-ruleset)
  (p1 ?a ?b ?c)
  (p2 ?a ?b ?c)
  (or (and (p3 ?a)
           (p4 ?b))
      (p3 ?c))
  ==>
  (printf "a = ~a, b = ~a, c = ~a~n" ?a ?b ?c))

would expand into the equivalent of the two rules:

(define-rule (some-rule-a some-ruleset)
  (p1 ?a ?b ?c)
  (p2 ?a ?b ?c)
  (p3 ?a)
  (p4 ?b)
  ==>
  (printf "a = ~a, b = ~a, c = ~a~n" ?a ?b ?c))

(define-rule (some-rule-b some-ruleset)
  (p1 ?a ?b ?c)
  (p2 ?a ?b ?c)
  (p3 ?c)
  ==>
  (printf "a = ~a, b = ~a, c = ~a~n" ?a ?b ?c))

This can simplify writing complex rulesets.

I am now coding the actual rule compiler. This will generate procedures for matching pattern elements and joining matches - applying join constraints. For example, the pattern

(p1 ?a 12 ?b : (> ?b 10) (c . ?c))

would compile into:

(p1
 (#:variable #f ?a #f (lambda (?a bindings)
                        (vector-immutable
                         (unsafe-vector-ref bindings 0))))
 (#:literal #f #f 12 (lambda (x bindings)
                       (if (= x 12) bindings #f)))
 (#:variable #f ?b (> ?b 10) (lambda (?b bindings)
                               (let ((?a (unsafe-vector-ref bindings 1)))
                                 (if (> ?b 10)
                                     (vector-immutable
                                      (unsafe-vector-ref bindings 0) ?a ?b)
                                     #f))))
 (#:variable c ?c #f (lambda (?c bindings)
                       (let ((?a (unsafe-vector-ref bindings 1))
                             (?b (unsafe-vector-ref bindings 2)))
                         (vector-immutable
                          (unsafe-vector-ref bindings 0) ?a ?b ?c)))))

where each pattern element is a five element list (<type> <key> <variable> <constraint> <matcher>), where <type> is the pattern element type (e.g., #:variable or #:literal), <key> is the key for association list matching, <variable> is a variable name, <constraint> is a constraint expression, and <matcher> is a procedure to match the pattern element. These are used at execution time to build the rule network.

The structure of the pattern is maintained in the compiled form and is used to structure the match nodes and the data flow between them. The <type>, <key>, <variable>, and <constraint> fields are used to determine when nodes can be shared among rules. Finally, the procedure in the <matcher> field does the actual matching and variable binding.

Data Structures for Matching and Joining

Immutable vectors are used to represent matches. For example, the pattern

(p1 ?a 12 ?b : (> ?b 10) (c . ?c))

matched against the asserted fact

(p1 10 12 14 (c . 16) (d . 18))

would produce a match of

#(<assertion> 10 14 16)

where <assertion> is the assertion object for the specified fact and 10, 14, and 16 are the values matching the variables ?a, ?b, and ?c, respectively.

Immutable vectors are also used to represent joined matches. For example, the patterns

(p1 ?a ?b ?c)
?p2 <- (p2 ?a ?b ?c)
(no (p3 ?a))

matched against the asserted facts

(p1 1 2 3)
(p2 1 2 3)

with no asserted (p3 1) fact, would produce a match of

#(#(<assertion> 1 2 3)
  #(<assertion> 1 2 3)
  #(#t))


A major bottleneck in the current inference engine is the amount of work done in propagating (or unpropagating) assertions and deletions through the join (and rule) nodes of the rule network. For version 3.0, I'm planning on two (eq?) hash tables for each join node to index left and right matches and a double doubly-linked list of joined matches. This will allow efficient insertion and deletion of both left and right matches during join processing (for propagating and unpropagating).

More to come.


Sunday, September 20, 2009

Science Collection Updated

I released a new version of the Science Collection with reimplemented statistics and Chebyshev approximation routines.

Statistics

The statistics routines have been rewritten to take sequences (for example, lists and vectors) as arguments instead of just vectors - except for the median and quantile routines that require sorted vectors. Unfortunately, the sequence versions, which are generally implemented using for/fold, were more than twice as slow as the previous vector versions. So, I wrote dispatching for and for/fold macros (see below) that take a general sequence and generate a cond expression with special cases for vectors and lists that the compiler can better optimize. These perform about the same as the old code for vectors, and lists perform about the same as vectors. And, we still get the benefit of general sequences.

I also added the correlation function that I had somehow missed in the original implementation. [Unfortunately, the first version had a bug (#203) in it - actually, there were two since the contract was also wrong. It has since been fixed.]

I also added running statistics to the statistics routines. If you only need simple statistics (e.g., n, min, max, mean, variance, and standard deviation), you don't need to maintain a list (or vector) of the values. Instead, you create a statistics object and tally successive values. The statistics object can be queried at any time for the values of its statistics at that point.

I also implemented a mean-variance function that computes both the mean and variance for a sequence in a single pass through the data. [The previous functions required two passes - one to compute the mean and one to compute the variance.] This also reduces the number of passes required for computing skew and kurtosis when the mean and standard deviation aren't given.
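A single-pass mean and variance can be computed with Welford's recurrence; this sketch illustrates the idea for a list of reals (the actual mean-variance function in the collection may differ in signature and detail):

(define (mean-variance-sketch xs)
  (let loop ((xs xs) (n 0) (mean 0.0) (m2 0.0))
    (if (null? xs)
        ;; Return the mean and the (sample) variance.
        (values mean (if (> n 1) (/ m2 (- n 1)) 0.0))
        (let* ((x (car xs))
               (n+1 (+ n 1))
               (delta (- x mean))
               ;; Update the running mean, then accumulate the sum
               ;; of squared deviations incrementally.
               (new-mean (+ mean (/ delta n+1))))
          (loop (cdr xs) n+1 new-mean
                (+ m2 (* delta (- x new-mean))))))))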

I also implemented contracts for sequence-of-real? and nonempty-sequence-of-real? that are used in the contracts for the statistics routines. As usual in the Science Collection, there are unchecked versions of the statistics routines that bypass the contract check.

Chebyshev Approximations

The Chebyshev approximation routines were reimplemented for clarity and efficiency. I changed the make-chebyshev-series function to be more general. It replaces the functionality of make-chebyshev-series-order and chebyshev-series-init, which are still available as legacy routines. I also added make-chebyshev-series-derivative and make-chebyshev-series-integral that generate new series to compute the derivative and integral, respectively, of a given Chebyshev series.

Dispatching for and for/fold

In reimplementing the statistics routines to allow general sequences as arguments, I also implemented dispatch-for and dispatch-for/fold macros, as described above. These macros are not exported from the statistics collection and are implemented using syntax-rules - so error messages are pretty much limited to 'bad syntax'. With that caveat, they do produce code that outperforms (due to compiler optimizations) a straight for or for/fold for vectors and lists.

The implementation of the macros and examples of their usage can be seen in the statistics.ss file in the Science Collection.


Saturday, August 08, 2009

Getting Ready for Inference Collection 3.0

I've been working on improvements to the inference collection now that we are actually using it (and the simulation collection) on a regular basis for agent-based simulation work. These will appear in Version 3.0 of the inference collection. This is just to put forward some of the changes I'm planning on including.
The major change affecting user code is that, in Version 3.0, only (immutable) lists will be used to represent facts. This includes association lists for named slots. [Previously, vectors and structures could also be used.] Immutability simplifies many things associated with truth maintenance and explanation.
An ontology capability will be added that allows the user to define a set of taxonomies for the facts in the knowledge base. [Actually, this capability is already in Version 2.2 of the inference collection, but isn't released to PLaneT yet.] Taxonomic relations are asserted (much the same as facts) and these relations are used in matching rule preconditions. For example, (assert '(#:class hunter ())) and (assert '(#:class duck-hunter (hunter))) specifies that facts about hunters also apply to duck hunters. Classes may be generalizations of multiple parent classes. Basically, when a fact is asserted, it matches precondition clauses for its own class and (recursively) its parent classes. So, (duck-hunter clyde) would match both (duck-hunter ?x) and (hunter ?x) patterns as preconditions in rules.
Backward chaining is better integrated with forward chaining. When propagating assertions (or retractions) through the rule network, backward chaining will be automatically invoked. [Currently, backward chaining is only invoked when explicitly requested - via (query fact).]
Rulesets have a run-time instantiation now. [They were just collections of rules before.] When a ruleset is activated, a ruleset instance is created. The ruleset instance contains the agenda (for rule instances from the ruleset). This allows different conflict resolution strategies to be applied to rulesets within a single inference environment. [It will also allow us to experiment with conflict resolution among rulesets in the future.]
Certainty values for facts are implemented. A certainty may be specified when a fact is asserted and these are maintained as facts are inferred via rule firings.
Boolean connectives are allowed among rule preconditions. This simplifies writing complex rules.
This is a fairly substantial rewrite of the inference collection and should result in a better package all around. Our agent-based simulation work is leading us to look at less monolithic knowledge bases and more distributed ones. Although Version 3.0 is not truly distributed, it should help us along those lines as PLT Scheme continues to mature.
I'm not aware of any regular users outside of our small cadre of agent-based simulation folks. But, if there are, I'd like to solicit your inputs on any changes you might like and also make you aware of specific changes to the collection.


Monday, December 01, 2008

Simulation Collection Update

I have updated the PLT Scheme Simulation Collection to Version 3.2. This was primarily to release the Scribble documentation. The Scribble documentation is about 90% complete, which is a bit more than the previous HTML documentation.

I have begun adding new graphics capabilities to the simulation collection. So far I have added variable-slider%, variable-gauge%, and variable-message% classes. These are subclasses of the MrEd slider%, gauge%, and message% classes that are automatically synchronized with a specified simulation variable.

The variable-slider% class sets its associated variable to the value of the slider. That is, whenever the user moves the slider, the associated variable value is changed. This is input only.

The variable-gauge% class updates the gauge whenever the value of its associated variable is changed. The range of a variable-gauge% object may also be associated with a simulation variable. This is output only (for both the value variable and the range variable, if any).

The variable-message% class displays its associated variable's value and is updated whenever the value of that variable is changed. This is output only.

The graphical open and closed loop examples have been updated to show the use of these classes.

As usual, these are available on the Schematics project at SourceForge. I will update the PLaneT package when everything is stable and documented.

There are several other variable controls I plan on implementing in the next few weeks. The variable-text-field% class will provide both input and output for its associated variable's value. The variable-plot% class will provide a plot capability for variables. Initially, plots will just be time-value plots, but it will support multiple variables on each plot.

Currently, for my agent-based simulations I am using the animated-canvas% class to display the cell grid (or whatever 2D or 3D representation is appropriate). I need to generalize that as part of generalizing the agent-based capability. [Although simulation processes aren't very heavy-weight, continually adding and removing many thousands of them from the event list each generation is quite time-consuming.] That is next on my to do list.


Friday, September 26, 2008

PLT Scheme Science Collection Version 4.1 Released

I released Version 4.1 of the PLT Scheme Science Collection to PLaneT earlier this week. There were no functionality changes, but the documentation has been converted to the Scribble format.

As usual, it is available from the PLaneT repository using the form:

(require (planet williams/science/science))

or

(require (planet williams/science/science-with-graphics))

depending on whether or not the graphics routines are needed.

The source code is maintained on the Schematics project subversion repository on SourceForge.


Animated Canvas Released to PLaneT

I've released the animated canvas that I use for animated graphics in PLT Scheme. An animated canvas uses two bitmaps to provide a double-buffered animation capability. This is implemented by a new class animated-canvas% that specializes the canvas% class. At any specific time, one bitmap, the background bitmap, is being used for all drawing operations and the other bitmap, the foreground bitmap, is being used to paint the canvas.

The swap-bitmaps method is used to swap the background and foreground bitmaps. When the bitmaps are swapped, the contents of the new foreground bitmap – the old background bitmap – is displayed on the canvas. The new background bitmap – the old foreground bitmap – is automatically cleared, unless specifically prevented by the 'no-autoclear style option.

The device context returned by the animated canvas's get-dc method is the device context of the background bitmap. This value is not valid across calls to swap-bitmaps. Therefore, it is important to re-retrieve, via get-dc, the device context across bitmap swaps.

The animated canvas also supports resizing on the canvas, which automatically resizes the background buffer after the bitmaps are swapped.
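A typical animation loop might look like the following sketch. The animated-canvas%, get-dc, and swap-bitmaps names are from the description above, but the frame setup and the drawing code are illustrative assumptions, not the shipped example.

;; Illustrative sketch only - the init arguments are assumptions.
(define frame (new frame% (label "Animation")
                   (width 400) (height 400)))
(define canvas (new animated-canvas% (parent frame)))
(send frame show #t)

(define (draw-frame)
  ;; Re-retrieve the device context - it is only valid until the
  ;; next call to swap-bitmaps.
  (let ((dc (send canvas get-dc)))
    (send dc draw-ellipse (random 360) (random 360) 40 40))
  ;; Display what was just drawn; the new background is cleared
  ;; automatically (unless 'no-autoclear is specified).
  (send canvas swap-bitmaps))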

A simple example using the animated canvas is also provided.

As usual, the package is available from PLaneT using the form:

(require (planet williams/animated-canvas/animated-canvas))

The source code is maintained on the Schematics subversion repository on SourceForge.

Labels: ,

Monday, September 01, 2008

Packed Binary Routines for PLT Scheme

I have implemented an equivalent of the Python pack/unpack functions in PLT Scheme. I needed it primarily to be able to (more easily) read binary data from other applications for analysis in PLT Scheme.

The documentation for the Python capability can be found in the Python Library Reference: 4.3 struct -- Interpret strings as packed data. I have implemented the entire capability except for the "p" and "P" formats that encode a "Pascal string". [I don't have pack_into or unpack_from implemented yet - they are new in Python 2.5 - but I will implement them before posting the package to PLaneT.] The following description is modified from the above reference.

packed-binary.ss

This module performs conversions between PLT Scheme values and C structs represented as PLT Scheme byte strings. It uses format strings (explained below) as compact descriptions of the layout of the C structs and the intended conversion to/from PLT Scheme values. This can be used in handling binary data stored in files or from network connections, among other sources.

The module defines the following functions:

(pack format v1 v2 ...)
Returns a byte string containing the values v1, v2, ... packed according to the given format. The arguments must match the values required by the format exactly.
(pack-into format buffer offset v1 v2 ...)
Pack the values v1, v2, ... according to the given format into the (mutable) byte string buffer starting at offset. Note that the offset is not an optional argument.
(write-packed format port v1 v2 ...)
Pack the values v1, v2, ... according to the given format and write them to the specified output port. Note that port is not an optional argument. [Note that write-packed is not part of the Python module, but was added because of its utility.]
(unpack format bytes)
Unpack the bytes according to the given format. The result is a list, even if it contains exactly one item. The bytes must contain exactly the amount of data required by the format [(bytes-length bytes) must equal (calculate-size format)].
(unpack-from format buffer (offset 0))
Unpack the byte string buffer according to the given format. The result is a list, even if it contains exactly one element. The buffer must contain at least the amount of data required by the format [(- (bytes-length buffer) offset) must be at least (calculate-size format)].
(read-packed format port)
Read (calculate-size format) bytes from the input port and unpack them according to the given format. The result is a list, even if it contains exactly one element. [Note that read-packed is not part of the Python module, but was added because of its utility.]
(calculate-size format)
Return the size of the byte string corresponding to the given format.
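As a quick illustration, here is a hedged round-trip sketch using the functions above. The format string and the expected sizes follow directly from the documented standard sizes (short is 2 bytes, int is 4, double is 8); the results shown are what the documented semantics imply, not verified output.

```scheme
;; A sketch of a pack/unpack round trip using packed-binary.ss.
(require "packed-binary.ss")

;; A short, an int, and a double, little endian, standard sizes.
(define buf (pack "<hid" 7 1000000 3.14))

(calculate-size "<hid")  ;; => 14 (2 + 4 + 8, no padding in standard mode)
(unpack "<hid" buf)      ;; => (7 1000000 3.14), always a list
```

Note that unpack always returns a list, even for a single-item format, so a one-value read still needs a car.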
Format characters have the following meaning; the conversion between C and PLT Scheme values should be obvious given their types:

  Format  C Type              PLT Scheme
  x       pad byte            no value
  c       char                char
  b       signed char         integer
  B       unsigned char       integer
  h       short               integer
  H       unsigned short      integer
  i       int                 integer
  I       unsigned int        integer
  l       long                integer
  L       unsigned long       integer
  q       long long           integer
  Q       unsigned long long  integer
  f       float               real
  d       double              real
  s       char[]              string
A format character may be preceded by an integral repeat count. For example, the format string "4h" means exactly the same thing as "hhhh".

Whitespace characters between formats are ignored; a count and its format must not contain whitespace though.

For the "s" format character, the count is interpreted as the size of the string, not a repeat count like for the other format characters. For example, "10s" means a 10-byte string while "10c" means 10 characters. For packing, the string is truncated or padded with null bytes as appropriate to make it fit. For unpacking, the resulting string always has exactly the specified number of bytes. As a special case, "0s" means a single, empty string (while "0c" means 0 characters).
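The size-versus-repeat distinction can be sketched as follows; the exact padded result is implied by the description above rather than verified output.

```scheme
;; "10s" is one 10-byte string; "10c" is ten separate characters.
(require "packed-binary.ss")

(bytes-length (pack "10s" "hello"))       ;; => 10, null-padded to fit
(car (unpack "10s" (pack "10s" "hello"))) ;; a string of exactly 10 bytes
(pack "3c" #\a #\b #\c)                   ;; three one-character values
```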

By default, C numbers are represented in the machine's native format and byte order, and properly aligned by skipping pad bytes if necessary.

Alternatively, the first character of the format string can be used to indicate the byte order, size, and alignment of the packed data according to the following table:
  Character  Byte order              Size and alignment
  @          native                  native
  =          native                  standard
  <          little endian           standard
  >          big endian              standard
  !          network (= big endian)  standard

If the first character is not one of these, "@" is assumed.

Native byte order is big endian or little endian. For example, Motorola and Sun processors are big endian; Intel and DEC processors are little endian.

Standard size and alignment are as follows: no alignment is required for any type (so you have to use pad bytes); short is 2 bytes; int and long are 4 bytes; long long is 8 bytes; float and double are 32-bit and 64-bit IEEE floating point numbers, respectively.

Note the difference between "@" and "=": both use native byte order, but the size and alignment of the latter is standardized.

The form "!" is available for those who can't remember whether network byte order is big endian or little endian - it is big endian.

There is no way to indicate non-native byte order (force byte swapping); use the appropriate choice of "<" or ">".

Hint: to align the end of a structure to the alignment requirement of a particular type, end the format with the code for that type with a repeat count of zero. For example, the format "llh0l" specifies two pad bytes at the end, assuming longs are aligned on 4-byte boundaries. This only works when native size and alignment are in effect; standard size and alignment does not enforce any alignment.
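The trailing-alignment trick can be sketched numerically; the sizes shown assume native longs are 4 bytes aligned on 4-byte boundaries, as in the example above.

```scheme
;; Trailing alignment with a zero repeat count.
(require "packed-binary.ss")

(calculate-size "llh")    ;; => 10 : two longs + one short, no padding
(calculate-size "llh0l")  ;; => 12 : two pad bytes added at the end
```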

The current implementation may not properly handle native alignment in all cases. For the current implementation, the native alignment is assumed to be the same as the size. This may result in excess pad bytes, particularly for 8-byte objects.

I will be putting the code up on the Schematics project on SourceForge (probably tomorrow) and it will be available on PLaneT soon thereafter.

Labels:

Tuesday, July 01, 2008

PLT Scheme on Vista

First off, I'd like to thank Eli Barzilay from the PLT team for helping track down these Vista 'features' that, together with a PLaneT 'feature', are causing my PLT Scheme problems on Vista. We also found the reason for long PLaneT download times on PLT Scheme V4.0 in the process.

This particular problem started when I tried to upgrade to PLT Scheme V4.0.1. After uninstalling PLT Scheme V4.0 (and making sure the uninstall completely removed the C:\Program Files\PLT directory) and installing V4.0.1, I would get the error "collects\srfi\compiled\provider_ss.zo::0: read (compiled): code compiled for version 4.0, not 4.0.1". I reported this on the bug tracker (http://bugs.plt-scheme.org/query/?cmd=view&pr=9554) where you can read the e-mail thread on the problem.

The Vista 'feature' that is biting us is called "UAC virtualization services" and is described in this article on the National Instruments Developer Site. (I'm sure there are many other such explanations that could be referenced.) It's a good read and goes into details that I will only summarize here. In short, if a user application attempts to modify protected portions of the file system (like C:\Program Files), the UAC virtualization service silently redirects the access to an unprotected, site-specific area (like C:\Users\doug\AppData\Local\VirtualStore\Program Files). Subsequent reads of affected files will use the copy in the virtual store. The problem is that install and uninstall (and probably some other application-specific programs) don't know about the virtual store, so files persist across uninstall/install boundaries.

This was coupled with the current PLaneT 'feature' (http://bugs.plt-scheme.org/query/gnatsweb.pl?debug=&database=default&cmd=view+audit-trail&cmd=view&pr=9555) that compiles PLaneT packages with errortrace (i.e., debugging) when the initial require is from within DrScheme - it doesn't seem to happen from within MzScheme. In my case, when I would require the science collection (for example) from within DrScheme, it would download the collection and compile it and all referenced collections with errortrace turned on. This would recompile the referenced PLT Scheme collections (in C:\Program Files\PLT\collects) and the compiled files were put in the virtual store. This works okay until the uninstall of V4.0 and install of V4.0.1 where the compiled files in the virtual store (from V4.0) were seen instead of the newly installed files (from V4.0.1).

In any case, you can read the e-mail thread in the bug list for the details of how we finally tracked the problem down.

There are some work arounds that can be used until the PLT Scheme team figures out the best way to 'play nice' with Vista - although it would be nice if Vista played nice with legacy applications, but don't hold your breath for that.
  1. When you do an uninstall, check to see if C:\Users\<user>\AppData\Local\VirtualStore\PLT exists. If it does, delete it also. This will remove the user-modified files that might exist - even if they didn't realize they had modified any. [Note that the location of the virtual store may be different, so you might want to do a search for 'PLT' and include system and hidden files in the search. Also, if there are multiple users, there may be multiple virtual stores.]
  2. For the initial load of PLaneT collections where you don't want errortrace turned on, do the initial require from MzScheme. For example, start up MzScheme from the command prompt and type "(require (planet "science.ss" ("williams" "science.plt")))" to load the science collection. This should work for any collection.
I will be 'upgrading' my laptop from Vista to XP and at least the first problem will go away. I would still recommend doing the initial require for PLaneT packages from MzScheme. I still have my Vista machine at home and will be dealing with some of these issues on an ongoing basis.

Labels:

Thursday, June 26, 2008

Progress Updating to PLT Scheme V4.0

Now that PLT Scheme V4.0 is officially released, I am updating the science, simulation, and inference collections to be compatible with the new version. These changes are incompatible with pre-V4.0 versions of PLT Scheme and will be stored in the PLaneT 4.x repository. Anyone using pre-V4.0 code should continue to use the versions in the PLaneT 3xx repository. V4.0-compatible versions of the science collection (Version 3.0) and simulation collection (Version 3.0) have been released already. I am now working on the V4.0 version of the inference collection (Version 2.0) and expect to release it this weekend.

There were several rather simple global changes that occurred in all of the code. For example, using the new #lang construct for all modules, using modules for all examples, hash table name changes, changes to require constructs, some contract changes (in particular union/c becomes or/c), and define-struct syntactic changes.

Some other changes were more difficult. The changes to keywords and keyword parameters required individual attention to each specific usage. Fortunately, much of my usage of keyword parameters is in macros and those still work the same in V4.0. I often use keywords as special values (e.g. #:now). These used to self-evaluate, but now require quoting. Fortunately, these don't occur in user code. The other major problem is with mutable lists. These also have to be looked at on an individual basis. In some cases, I reverted to immutable lists (e.g. when using reverse! to reorder a constructed list being returned from a function) and in other cases I converted them to explicit mutable lists. It can be a chore to track down all of the references in a large body of code.

One big remaining issue is converting the documentation over to use Scribble. Unfortunately, the existing HTML documentation for PLaneT packages is not made available in the new Help Desk. [I honestly think that they should retain the ability for developers to distribute HTML documentation for PLaneT packages.]

PLT Scheme Science Collection: The science collection was rather straightforward to convert since I had kept the older versions compatible with both 372 and 399 (the V4.0 pre-release). Version 2.9 is the last pre-V4.0 compatible version and Version 3.0 is the new V4.0 compatible version.

PLT Scheme Simulation Collection: The simulation collection was more complicated to convert since it mutates lists in the implementation of the event list and variable histories (and probably a few others that I don't remember). I also had some problems with keywords since I had implemented my own keyword parameter scheme before it was added into PLT Scheme. Version 2.2 is the last pre-V4.0 compatible version and Version 3.0 is the new V4.0 compatible version.

PLT Scheme Inference Collection: The inference collection makes extensive use of list mutation and I am still tracking down and fixing them on an individual basis. I expect to have the V4.0 compatible version out this weekend.

If anybody runs into any problems with any of these packages, please let me know.

Labels:

Thursday, March 27, 2008

Homogeneous, N-Dimensional Arrays (ndarrays)

The basic representational element for PLT Scheme Schemelab is a homogeneous, n-dimensional array - or ndarray. This post describes the initial design for ndarrays and discusses some of the operations on them.

Internally, an ndarray is represented by a structure with the following elements:
  • ordering - (symbols 'row 'column), specifies the order in which dimensions are stored ('row for row major and 'column for column major). Generally, this isn't that useful except for (eventually) interfacing with foreign code (C uses row major and FORTRAN uses column major). Also, the transpose operation converts from one to the other (although that is more a side effect than its real functionality). [read only]
  • shape - (listof natural-number/c), specifies the shape of the array. The length of the shape list is the number of dimensions for the array and each element is the cardinality of the corresponding dimension. This may be set to reshape the array.
  • ndim - natural-number/c, the number of dimensions for the array. This is equal to (length shape). [read only]
  • size - natural-number/c, the total number of elements in the array. [read only]
  • disp - natural-number/c, the displacement (in elements) of this subarray in the data vector. This will be zero for any ndarray that owns its own data. [read only]
  • strides - (listof natural-number/c), a list of the number of elements to skip to move to the next element in the corresponding dimension. [read only]
  • offset - natural-number/c, the offset (in elements) for each addressed element. This will be zero for any ndarray that owns its own data. [read only]
  • data - typed-vector?, the typed vector containing the data for the array. [A typed vector encapsulates an SRFI 4 vector (for u8, u16, u32, u64, s8, s16, s32, s64, f32, or f64 arrays), a complex vector (for c64 or c128 arrays), or a Scheme vector (for Scheme object arrays).] [read only]
  • base - (or/c array? false/c), the base array for this (sub)array or #f if the array owns its own data (i.e. it is a base array). Note that the referenced array may itself have a non-#f base entry. [read only]
Note that I may not need both the displacement (disp) and offset fields. But, the general idea for array iteration is to add the displacement once when the iterator is initialized and to add the offset for each element. It might be possible to collapse those into one offset without loss of generality.

The most general array constructor is make-array, which has the following contract:


(->* ((listof natural-number/c))
     (#:vtype (or/c vtype? symbol?)
      #:ordering (symbols 'row 'column)
      #:fill any/c)
     array?)


The required argument is the shape of the array. The optional keyword arguments are #:vtype, which specifies the type of the array (default object), #:ordering, which defaults to 'row, and #:fill. If #:fill is not specified, the array elements are not initialized.

Example:

(define a1 (make-array '(3 2) #:vtype f32 #:fill 0.0))

Creates an array whose shape is '(3 2) (i.e. three rows and two columns) and whose elements are 32-bit floating-point numbers that are initially 0.0.

A fully qualified reference for the array returns the corresponding (scalar) element. For example, (array-ref a1 '(1 1)) returns the value of the element at '(1 1).

A partially qualified reference for the array returns a reference to the corresponding subarray. For example, (array-ref a1 '(: 1)) returns the subarray of shape '(3) that corresponds to the second column of a1. [Note that this is a reference to the second column of a1 and not a copy of it. Array referencing never copies anything.] Likewise, (array-ref a1 '(1 :)) [or equivalently, (array-ref a1 '(1))] returns a reference to the second row of a1.

Array mutation also allows partially qualified references. In that case, the value specified must be broadcastable (described in a future post) to the shape of the reference. For example, (array-set! a1 '(: 1) 1.0) sets the elements of the second column of a1 to 1.0 (yes, the scalar 1.0 is broadcastable to shape '(3)). Likewise, (array-set! a1 '(1) '(4.0 8.0)) sets the elements of the second row of a1 to 4.0 and 8.0.

A complete set of array manipulation functions will be provided. This will include: build-array, array-map, array-map!, array-for-each (i.e., similar to what SRFI 43 provides for vectors).

I haven't thought about the semantics for things like array-fold and array-unfold. They may be well-defined someplace - I really haven't looked. If anyone knows where or has any ideas, please let me know.

Note that it is not my intention to define functions like add, subtract, multiply, divide, ... (as numpy does for Python). Rather, we will rely on the array mapping functions to extend scalar operations to arrays.
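The mapping approach can be sketched as follows; this is a hedged sketch only, since the design above names array-map but does not fix its signature - the element-wise (array-map f a b) form is an assumption.

```scheme
;; Extending scalar + to arrays via array-map, instead of a
;; dedicated add function as numpy provides.
(define a (make-array '(3 2) #:vtype f32 #:fill 2.0))
(define b (make-array '(3 2) #:vtype f32 #:fill 3.0))

;; Assumed: array-map applies f element-wise across same-shape arrays.
(define c (array-map + a b))
(array-ref c '(0 0))  ;; each element of c would be 5.0
```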

Labels:

Schemelab Concept

My PLT Scheme project for this year is to create a Schemelab collection to provide better analysis capabilities in PLT Scheme. It will be something of a mini-Matlab with capabilities similar to what numpy, scipy, and matplotlib provide in Python. I plan on making a new numeric collection to provide homogeneous, n-dimensional arrays (and, as a subset, matrices) as the primary underlying representation. Next, the science collection will be updated to use the new numeric package. Finally, a new plot collection will be developed to provide better visualization functionality. [After these are done I'll also update the simulation and inference collections to use the new Schemelab packages.]

I've already started work on the numeric collection. The basic representational element is a homogeneous, n-dimensional array (ndarray). An ndarray's type and shape are specified when it is created. The type may be any of the element types allowed for SRFI 4 vector types (u8, u16, u32, u64, s8, s16, s32, s64, f32, or f64), a complex type (c64 or c128), or may hold any Scheme object (object). The shape is a list of natural numbers where the length of the shape is the number of dimensions for the ndarray and each element is the cardinality of the corresponding dimension. References to array elements will support array slicing (in any dimension) for both array accessing (array-ref) and mutation (array-set!). Slicing operations create new views of the referenced array as opposed to copying portions of the array. I will start posting entries on different aspects of the numeric collection in the next few days.

The updates to the science collection to support (or to make use of internally) the new numeric collection should be relatively straightforward. The main issue will be to retain compatibility with the existing data types (i.e. vectors). Most of the really numerically complex code (e.g. special functions and random number distributions) will not be affected since they don't generally provide vector or array operations. The main changes will be to the statistics and histogram modules, which will be modified to work with ndarrays as well as vectors. Finally, some of the modules (e.g. histograms and ordinary differential equations) will benefit from being reimplemented using ndarrays internally.

I've also prototyped some code for the new plot package that provides much more functionality than the current PLoT package. The initial capability will be very much patterned after the functionality provided by Matlab (or more precisely, matplotlib for Python, which is also based on Matlab). It provides precise control over the elements of a graph (or multiple subgraphs). It also provides interactive graphics functionality for more dynamic analysis capabilities.

Note that all of these new (or updated) collections require PLT Scheme Version 4 (currently 3.99) and are not compatible with earlier versions. As such, I am not releasing them to PLaneT until either PLT Scheme Version 4 is officially released or there is a separate PLaneT repository for PLT Scheme Version 4 code. I will be putting the code on the Schematics web site at some point.

Labels:

Monday, November 19, 2007

PLT Scheme Science Collection Version 2.8

Version 2.8 of the PLT Scheme Science Collection has been released via PLaneT. There are two major changes from Version 2.7:
  • The random-sources.ss file has been reverted back to its previous implementation. This is because the interface for SRFI 27 was changed to be compatible with the pre-V371.1 version of PLT Scheme. That is, Version 2.7 is only required for PLT Scheme V371.1 (which probably isn't being used by anyone). Version 2.8 will work with V371 (and before) as well as with V371.2 and later.
  • Unchecked versions of many of the functions are now provided that bypass the contract check. These are called internally where it is known that the contract would be met. Obviously, they may also be called from user code in similar circumstances or where the contract check is not desired for some reason.
The unchecked version of a function is provided by renaming the function in a provide statement. The standard version is provided in a provide/contract statement. A trivial example is:

(module test-unchecked mzscheme

  (require (lib "contract.ss"))

  (provide
   (rename double unchecked-double))

  (provide/contract
   (double
    (-> real? real?)))

  (define (double x)
    (+ x x)))


This works fine. But, I'm not sure there is any guarantee that it will work in the future.
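Usage of the two exported names might look like the following sketch; the behavior on bad input is what the contract system implies, not verified output.

```scheme
;; Using the module above: the contracted and unchecked versions
;; behave identically on valid input.
(require test-unchecked)

(double 3.0)            ;; => 6.0, contract checked on the way in
(unchecked-double 3.0)  ;; => 6.0, contract bypassed

;; (double "x") raises a contract violation blaming the caller;
;; (unchecked-double "x") would instead fail inside + with a type error.
```

The trade-off is the usual one: the unchecked entry point saves the per-call contract cost for internal callers that already guarantee the precondition, at the price of less helpful errors if that guarantee is wrong.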

This will probably be the last revision before the release of PLT Scheme v4.0. I have branched the source code (at the Schematics project on SourceForge) for the PLT Scheme Science Collection Version 3.0 to be released with PLT Scheme Version 4.0. These changes will be tested using the PLT Scheme v3.99.x releases.

Labels: