Juggling has been around for many centuries as an art form. Only within the past fifteen years has it become an interest of scientists and mathematicians alike. During this past decade, various numerical systems have been developed to ease the explanation of patterns to jugglers around the world. The most widely accepted notation, siteswap, was developed in 1985 by three people working independently: Bruce Tiemann at Caltech, Paul Klimek in Santa Cruz, and Mike Day at Cambridge. Their work eventually laid the foundation for a much more mathematical formulation of juggling in time and space.
Siteswap notation has become quite popular in recent years because of its ease of use. While its basic form does not describe more complex problems and patterns, extensions have been developed to add robustness to the system. A pattern is written as a string of integers. Each number gives the number of beats that must pass before that particular object is thrown again, and successive throws in the pattern alternate hands. Using this idea, the basic three ball cascade would be denoted by:
3L 3R 3L 3R 3L 3R 3L 3R 3L 3R 3L 3R
However, to simplify the notation even further, one need only report a single period of the pattern. The pattern above then becomes [3].
The basic pattern for N objects can thus be displayed by the vector [N], which makes it easy to see how many objects are in the pattern. Patterns that are more complex, with period greater than one (L > 1), can still easily be checked for object count: the mean of the individual throw values gives the number of objects.
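This averaging check can be sketched in a few lines (a minimal sketch; the example patterns are illustrative):

```python
def object_count(pattern):
    # The mean of the throw values gives the number of objects in the pattern.
    return sum(pattern) / len(pattern)

print(object_count([3]))        # three ball cascade
print(object_count([5, 3, 1]))  # a period-three, three object pattern
```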
A pattern that does not have an integer number of objects is not feasible within the scope of this project. While a non-integer value is sufficient for refusal, an integer value is not sufficient for acceptance. This can be seen with the patterns [5 4 3] and [3 4 5]. While both have a mean of four, only the second is a possible pattern. By finding a landing schedule for each object (a time N throws in the future), one sees that the first three objects in pattern one would land simultaneously. Without the ability to catch and throw more than one object at a time (multiplexing), this cannot work. Multiplexing is possible, but is not discussed here; the notation would have to be extended to display such a time instance.
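The landing-schedule test can be sketched as follows, assuming no multiplexing. A period-L pattern is valid exactly when the landing beats (position plus throw value, modulo L) are all distinct; this is a minimal sketch rather than the project's actual code:

```python
def is_valid(pattern):
    # Each throw at beat i lands at beat (i + value) mod L; without
    # multiplexing, no two throws may land on the same beat.
    L = len(pattern)
    landings = {(i + t) % L for i, t in enumerate(pattern)}
    return len(landings) == L

print(is_valid([5, 4, 3]))  # False: all three throws land together
print(is_valid([3, 4, 5]))  # True
```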
From this very basic introduction to siteswap, one can see its ease of use. While it has some complex mathematical roots, it is a system that requires very little mathematical skill to use. That has been a major factor in its rise in popularity over the past ten years. While other systems, such as state diagrams and ladders, have been developed, they have seen little use because of their complexity and the difficulty of transferring information in those formats.
For the past several years, the question of how to find transitions between two patterns has drawn attention. The pursuit has remained mostly a mental exercise for jugglers; no suitable and effective way to find transitions has been established. There is currently one computer program available over the internet that will demonstrate various transitions. One drawback of the program is that the transitions are performed without the user knowing what they are. They are also often not transitions that are simple or "juggler friendly."
For this project, I wanted to develop a "smart" transition engine: something that could give the user an array of transitions. To ease the initial robustness problem, I decided to focus only on patterns that use the same number of objects and in which no more than one object can land at any given time. Another limitation was a maximum throw value of nine; in actual juggling practice, few people in the world can consistently make throws larger than that.
There are some combinations of patterns that can be considered "steady state." This implies that there is no transition needed. While no transition may be necessary, I felt it would be of interest to find other transitions. By doing this, one can make the pattern transitions more flashy and, hopefully, crowd pleasing.
My original approach to the transition solution was to use a neural network. I chose this for two reasons. One was that I am relatively familiar with the neural network theories. The other reason was the ability of neural networks to learn patterns and optimize solutions.
From the beginning I realized this approach would require quite a lot of work. The first problem I noticed was that juggling patterns don't all have the same periodicity. Because of that, there was no sure size for the input vectors. Even by using the concept of "dead" or inactive neurons, the system would not reach the robustness I had hoped for. A reason for that is the uncertainty in the length of the desired output vector.
Since there is always more than one possible transition for two patterns, the length of the output is very crucial. For an initial pattern of period L and transition of length n, there is always a transition of length L+n. I was looking for a system that could find transitions of varying lengths in a simple and quick fashion.
In an attempt to overcome this latest problem, I ventured into the workings of modular networks. These would allow me to have several smaller networks each finding a transition of a particular length. While I still feel this approach could produce sufficient results, I found myself looking into the field of genetic search algorithms.
The field of genetic algorithms was new to me, but upon some initial reading, I felt it could produce the results I hoped to obtain. I was encouraged by the ability to create a search that does not need to be exhaustive. I had considered doing some sort of search earlier, but discarded the idea because of the amount of search time needed.
Because I am searching for transitions from a length of zero up to the number of balls in the pattern, the searches grow exponentially. For each search there are 10^x possibilities, where x is the transition length, because possible throw values range from zero to nine. While these searches are quick for lengths of zero and one, longer lengths become costly in time.
The genetic search begins with a generation of transitions that are random integers of the given length. The size of the generation affects the search time in two ways. A small generation shortens the time per generation, but requires more generations to cover the set; a larger generation takes longer per generation, but covers a larger range of transitions faster.
Each of these first-generation transitions is immediately tested to see whether it works with the two patterns entered. If one of them works, that value is returned to the program and the search is called again for the next search length. If none of the first-generation transitions work, the next generation must be created.
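The first-generation step can be sketched like this. The names are my own, and `is_valid_transition` is a stand-in for the program's pattern-specific validity check, which is not shown here:

```python
import random

def initial_generation(size, length):
    # Each candidate transition is `length` random throw values in 0-9.
    return [[random.randint(0, 9) for _ in range(length)] for _ in range(size)]

def find_valid(generation, is_valid_transition):
    # Return the first candidate that the validity check accepts, if any.
    for candidate in generation:
        if is_valid_transition(candidate):
            return candidate
    return None
```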
Each consecutive generation goes through what is called a mating process. During this time, the previous generation is given a fitness value that is used to select which transitions will be used to mate.
The fitness function quickly became an interesting problem in itself. Here is where I put down the books and picked up my juggling balls. I went through several of the tricks I knew to see which transitions I actually use. Next, I sat down with a large list of patterns and tried to find some transitions on my own. From this I noticed that most transitions have throw values close to the number of objects in the pattern. I used this observation to construct a discrete probability function for each throw value and object number being used. The values used for the fitness function can be seen below in the search results section.
After the fitness values for the previous generation are calculated, they are normalized so they sum to one. Random numbers are then drawn, and the transition whose range contains each drawn value is chosen for mating. Once the next generation is fully selected, the mating of transitions actually occurs.
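The selection step described above amounts to roulette-wheel selection; a minimal sketch (function and variable names are my own):

```python
import random

def select_for_mating(generation, fitness):
    # Normalize fitness values so they sum to one, then draw members
    # with probability proportional to fitness (roulette-wheel selection).
    total = sum(fitness)
    probs = [f / total for f in fitness]
    selected = []
    for _ in range(len(generation)):
        r = random.random()
        cumulative = 0.0
        for member, p in zip(generation, probs):
            cumulative += p
            if r <= cumulative:
                selected.append(member)
                break
        else:
            selected.append(generation[-1])  # guard against float round-off
    return selected
```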
The mating function uses two separate random vectors. The first is a random permutation of the positive integers up to the generation size. The other is a string of random integers from zero up to the length of the transition. These select the mate and the genes for mating, respectively. For each member of the current generation, the corresponding values from each vector are taken: the first chooses the other generation member to mate with, and the second gives the number of genes from the end of the string that will replace those on the current member. This continues for each member of the generation.
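The mating step can be sketched as follows (a minimal sketch; names are illustrative):

```python
import random

def mate_generation(generation):
    # Pair members via a random permutation and splice in tail genes.
    size = len(generation)
    length = len(generation[0])
    partners = random.sample(range(size), size)   # random permutation of indices
    tail_counts = [random.randint(0, length) for _ in range(size)]
    children = []
    for i, member in enumerate(generation):
        partner = generation[partners[i]]
        n = tail_counts[i]  # genes from the end replaced by the partner's
        children.append(member[:length - n] + partner[length - n:])
    return children
```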
From this point there is only one step left in the genetic algorithm before another validity search is called: gene mutation. Mutations in nature occur only rarely, but some mutant genes have given an animal or plant an extra advantage over its counterparts. This is the idea behind using mutations in the algorithm.
These mutations are calculated quite easily. The first step is to choose a percentage value for mutations, usually quite low. Then there is a systematic movement from gene to gene throughout the generation. At each gene, a random number is drawn; if it is less than the chosen percentage, a new random integer replaces that gene.
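This mutation pass can be sketched as (names and the default rate are illustrative):

```python
import random

def mutate(generation, rate):
    # Walk gene by gene; with probability `rate`, replace the gene
    # with a new random throw value in 0-9.
    for member in generation:
        for i in range(len(member)):
            if random.random() < rate:
                member[i] = random.randint(0, 9)
    return generation
```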
By using mutations I was able to prevent the search from stabilizing on the most fit values as defined by the function. A higher mutation rate creates a more random search. With a lower mutation rate, the search is still semi-random, but still has some direction.
My initial search results proved quite promising. The search provided several options for transitions between a wide variety of patterns. The search time, however, was not as fast as I wanted. For short patterns and few objects, the times were very good. As pattern period and object number increased, though, the search times exploded.
I tried to reduce the search time by several methods. The first was to optimize the code that checks for transition validity, which helped a considerable amount. I moved on from there to the variables within the search itself. Because the search quits when it finds the first solution, it was difficult to tell what effect generation size had on search times. Reducing the maximum number of generations reduced search times considerably, but also reduced the average number of solutions presented.
I tried two different representations of the fitness function. The first was a matrix with separate values for each throw value and object number. This did work, but had little effect on search times in either direction.
The other fitness values used were based upon the ones seen above. I tried to create a cumulative view that would allow one vector to have the same effect. I summed the fitness of each throw value over every object number listed and used those as the fitness function. Below is a graph of the cumulative values and how they appear when normalized.
As you can see from the graph, this approach creates a relatively flat normal curve that is skewed to the right. I was hoping that this approach would place the values below three into a range where they would be less fit than the others, and that is exactly what happened. I wanted that because of the so-called crowd value associated with juggling: values less than three, while helpful at times, are relatively boring to watch. These values also did not have a large effect on the search times, but produced sufficient results.
The only other major variable in the search algorithm was the mutation rate. I began very low (about 1E-4), and gradually increased it. As I increased the percentage the search times slowly decreased on average. This was due mainly to the fact that the algorithm was finding more transitions and thus not reaching the maximum number of generations as often.
After running through the program many times with a large assortment of patterns, I began to notice another occurrence. Most of the found transitions were of length five or less. I have since then concentrated much of my effort on the patterns for larger numbers of objects. While occasionally a transition will be found of a length greater than five, most of those are repeats of the previous finds.
Because of that, they are really not needed and are very costly in computation time. By limiting the transition length to five instead of the number of objects, time was reduced considerably on average. This will allow me to increase some other variables, such as generation size, and hopefully increase the number of found transitions.
While there are certainly unique transitions that fall outside of this range, I do not feel they are worth the computational effort to discover. When actually performing, a transition that is too long will seem like its own pattern and not really be worth the effort.
This first phase of the project has proved to be very promising thus far. The next sections will provide a glimpse of how this phase can be extended further and hopefully produce equally exciting results.
MHN (Multi-Hand Notation) was developed in 1991 by Ed Carstens of the University of Missouri - Rolla. It is based mainly on the siteswap notation described above. MHN allows the description of patterns in which more than one set of hands is involved, and thus allows objects to be exchanged between jugglers.
This more complicated approach is written as a three dimensional matrix of values. The first two dimensions contain the particular throw values for each object at each time for each person involved. Each person has their own pattern that is described as a row vector as described by siteswap. All patterns are then put together to form the base matrix. One limitation is that all patterns involved must have the same period, or else the matrix is not complete.
The third dimension of the matrix is considered the relative throw matrix. A corresponding value for each throw in the base matrix is given. This matrix is allowed to have both positive and negative values. The values correspond to the relative row, or person, to which the object is being thrown. Thus a positive number will move the object down a specified number of rows while a negative number will move the object up in the matrix. As expected a zero value refers to a toss thrown to oneself.
There are still some elegant, yet simple, mathematical properties associated with this notation. The first is the basic averaging property of siteswap: averaging the throw values over the entire base matrix gives the average number of objects per person, and multiplying by the number of rows (people) gives the total number of objects being juggled. Note that the per-person value need not be an integer. This is evident in a pattern such as seven club ultimate, where only two people juggle seven objects.
Another really nice property comes from the relative throw matrix: all of its values must sum to zero. Any matrix that does not sum to zero results in one person holding more objects than they started with and someone else holding fewer.
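Both properties are easy to check mechanically. The base matrix below is a hypothetical two-person, period-two pattern (one row per person, one column per beat; all values illustrative), read with the averaging property as stated above:

```python
def objects_per_person(base):
    # Averaging every throw value in the base matrix gives the average
    # number of objects per person; it need not be an integer.
    values = [t for row in base for t in row]
    return sum(values) / len(values)

def total_objects(base):
    # Multiplying by the number of rows (people) gives the total.
    return objects_per_person(base) * len(base)

def relative_matrix_ok(relative):
    # The relative throw matrix must sum to zero.
    return sum(t for row in relative for t in row) == 0

base = [[4, 3], [3, 4]]          # hypothetical two-person pattern
print(objects_per_person(base))  # 3.5 objects per person
print(total_objects(base))       # 7 objects in total
print(relative_matrix_ok([[1, 0], [-1, 0]]))  # True
```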
A similar approach to that described above should be sufficient to determine transitions between these much more complex multi-person patterns. While the theory behind them is the same, a much more sophisticated algorithm is required to determine whether a given transition is plausible. The validity algorithm is the only major change from the initial search; the same arguments that describe the regular transition apply to the passing transition.
This, however, is a very new area to pursue. I am not currently aware of any substantial work previously done in this area. It would be something new that I feel could benefit a number of people: not only would jugglers benefit from the application, but searching for complex patterns is also a very useful area in its own right.
The final aspect that I would like to pursue with this project is the simulation of juggling using a neural network. There are currently a number of programs available that can simulate juggling on a computer. What I want to develop is something in which the computer itself learns to juggle: something that requires very little programming and allows the computer to learn and adjust itself.
That is why I began with the juggling transitions. I feel that transitions are the key to a realistic juggling simulator. The transitions will ensure that the patterns are not just being memorized by the computer; they will help the computer learn the concept of juggling. Although it requires a lot of time and effort, it is not especially difficult to program a computer to juggle. Teaching a computer to learn to juggle is something altogether different, and it shifts effort away from the programming side.
The idea of teaching a computer to juggle could of course be extended to the use of robots. There has been some work done in the past on developing a juggling robot. Although some of these juggling machines have been developed out of ingenuity, there has not been one developed that can effectively juggle in a human-like fashion. By developing a smart computer that can juggle, extending that learning to a robot would be a considerable jump in the area of robotics movement and learning.
Although I will not pursue the robotics aspect due to lack of time, it is a very interesting field. It could be developed using several different techniques. The most basic juggling pattern would allow the robot to learn throwing and catching, and more complex patterns could be added to test its robustness.
A couple of different ways could be used to implement the visual feedback that people receive while juggling. One could be a computer vision system that learns to watch critical paths of the objects. Another possibility could be the use of sound or electronic signals from each object to describe its position.
Altogether, this smart transitions project has some very interesting and simple roots. Despite the seeming simplicity, the available complexity that it can extend to provides a considerable problem. Smart engineering systems have proven effective thus far and look promising for future applications and extensions.
Hagan, Martin T., Howard B. Demuth, and Mark H. Beale; Neural Network Design; PWS Publishing Company; 1996.
Goldberg, David E.; Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley Publishing Co., Inc.; 1989.
Knutson, Allen; Siteswap FAQ; Juggling Information Service.
Carstens, Ed; Mathematics of Juggling; 1992.
Truzzi, Marcello, and Massimiliano Truzzi; Notes Toward a History of Juggling; 1974.
Note: Information on juggling was found at the Juggling Information Service website.