# Mutual Interaction

These examples (provided by Golan, with only minor changes by Roger) illustrate the power of mutual interaction.

In my previous notes, I showed an example of attraction between particles and fixed locations using the inverse square law (attraction is scaled by 1/(distance*distance)).

Now, we consider interactions *between every pair of particles*!

## Mutual Repulsion with Optional Gravity

Here, we create `nParticles` (= 400) particles. The idea here is that particles try to move away from each other. To implement this idea, we represent “try to move away” as a force. The closer particles are together, the greater the force that pushes them apart. In fact, we use our recent friend, *the inverse square law*, and make the force proportional to 1/(distance*distance). As distance increases, this force approaches zero, but at small distances, e.g. distance = 1, it is significant. In fact, the force is infinite at zero distance, so to avoid huge forces and the resulting high velocities, we do not apply the force when distance < 1, figuring that with particles in constant motion, the two very-close particles will soon drift apart to a distance greater than one, at which point the opposing force will kick in again.
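The rule can be sketched as a small function. This is a minimal illustration, not the course code: the function name `repulsionForce`, the `strength` parameter, and the assumption that particles are plain objects with `x` and `y` fields are all ours.

```javascript
// Inverse-square repulsion on particle a from particle b (a sketch;
// names and fields are illustrative, not from the original code).
function repulsionForce(a, b, strength = 1) {
  let dx = b.x - a.x;
  let dy = b.y - a.y;
  let d = Math.sqrt(dx * dx + dy * dy);
  // Skip very close pairs to avoid near-infinite forces; the particles
  // will soon drift apart, at which point repulsion resumes.
  if (d < 1) return { fx: 0, fy: 0 };
  let magnitude = strength / (d * d);  // the inverse square law
  // Unit vector from b toward a, scaled by magnitude: pushes a away from b.
  return { fx: -magnitude * (dx / d), fy: -magnitude * (dy / d) };
}
```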

### Implementation

We use a nested loop: for each particle, we want to compute the force applied by each other particle. All the forces add together, and in fact, the final net force may be zero due to particles pushing in opposite directions.

Looking at the inner for loop, you would expect `j` to run from 0 to `myParticles.length`, but instead, `j` runs from 0 to `i`, where `i` is the index of the particle. Why? If we want to compute the force from every particle, why does the inner loop only consider some of the particles? In fact, the inner loop *could* loop over every particle, but notice that when you know the force of particle A on particle B, you also know the force of particle B on particle A. This particular implementation optimizes the computation by computing forces only in one direction, from, let’s say, A to B. When force is applied to A, we simply apply the negative force to B, eliminating the need to (re)compute half of the forces. Looking at the end of the inner loop, you see

```javascript
ithParticle.addForce( repulsionForcex,  repulsionForcey); // add in forces
jthParticle.addForce(-repulsionForcex, -repulsionForcey);
```

In the inner loop, `j` is always less than `i`, which eliminates half of the interactions, but we make up for this by updating *two* particles on each iteration of the inner loop. (Also, we eliminate the force of a particle on itself, i.e. where `i === j`.)
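The overall loop structure can be sketched as follows. This is a simplification, assuming each particle accumulates forces in plain `fx`/`fy` fields rather than through the actual `addForce` method:

```javascript
// Visit each unordered pair (i, j) with j < i exactly once, and apply
// the equal-and-opposite force to both particles (a sketch, not the
// course code; pairForce computes the force of particle j on particle i).
function accumulatePairForces(particles, pairForce) {
  for (let i = 0; i < particles.length; i++) {
    for (let j = 0; j < i; j++) {        // j < i: each pair visited once
      let f = pairForce(particles[i], particles[j]);
      particles[i].fx += f.fx;           // force of j on i
      particles[i].fy += f.fy;
      particles[j].fx -= f.fx;           // equal and opposite: i on j
      particles[j].fy -= f.fy;
    }
  }
}
```

Because every force added to one particle is subtracted from another, the net force summed over all particles is always zero, which is a handy sanity check when debugging.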

## Efficiency

How long does it take to compute the `draw` function? Let’s think about what’s involved:

- There are fixed costs per draw: clearing the canvas, initializing variables, etc.
- There are per-particle costs: computing forces on the particle, drawing each particle.

In many cases, graphics operations dominate the cost of computation. Think about the cost of computing a few numbers like x, y, velocity compared to the cost of computing and drawing the outline of an ellipse pixel-by-pixel, and filling the ellipse, again pixel-by-pixel. Filling an area with pixels is at least some kind of nested loop, and the work is at least proportional to the number of pixels involved.

If there are not too many particles, you could correctly assume the cost of draw() is *proportional* to the number of particles. In computer science, we use N to represent the “size of the problem,” e.g. the number of particles. (N could also be the number of characters of text to be processed by some text-processing program, or the number of data records to be searched in a database application; here, N is clearly the number of particles.)

One of the great achievements of computer science is the idea that we can think about how run-time depends upon N. We just said run-time is *proportional* to N, or in other words, run time is roughly N times some number, e.g. maybe it’s N * 0.0001s, meaning drawing takes about 0.0001 seconds per particle. (Yes, there are also some fixed costs to clear the canvas, etc., but these matter less and less as N gets larger and larger, so the cost is dominated by N.)

But wait! What about the inner loop and force computation? How many force updates are there? If there are N particles, and each particle requires N/2 force computations, the total number of steps is N * N/2. Yikes. Removing the “/2” which is just a scale factor that does not change with N, we have N^2 (N-squared) operations. If N is small and computation is fast, this is no big deal compared to the huge overhead of drawing all those pixels for each particle, but what if N is larger? Suppose N = 10,000? How many force computations? 10,000 * 10,000 = 100,000,000. Double yikes! My computer takes about 2.5 seconds to do this much work. That’s nowhere close to animation rates. What can you do about it? Probably using another programming language (e.g. C or C++) would speed this up 10-fold. Doing the computation in parallel (more programming, more hardware) could get us up to animation speeds. But then, what about N = 100,000? Now, we’ve added 100 times more work and we’re in trouble again.
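You can see the N-squared growth directly by counting iterations of the nested loop. This toy function is ours, not part of the course code:

```javascript
// Count the pair interactions performed by the j < i nested loop.
// The closed form is n * (n - 1) / 2, i.e. roughly n^2 / 2: the
// constant factor 1/2 drops out, but the quadratic growth does not.
function pairCount(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < i; j++) count++;
  }
  return count;
}
```

Doubling N roughly quadruples the count: that is what makes N = 10,000 so much worse than N = 400, even though both are easy to draw.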

Conclusion: Algorithms matter! Computers are not infinitely fast, and you now know enough to write interesting computations that will bring your computer to its knees. Some simple analysis can tell you how your performance will depend on problem size (N). It’s the nature of the scaling (is it proportional to N? N squared? 2-to-the-N power?) that is most important when N gets large.

## Flocking

This is a brief introduction to a very interesting phenomenon called “flocking.” A more complete introduction to flocking is available online, as is a great deal of literature. Our goal is to illustrate the main concepts. We might ask you about flocking on our written exam, but we do not expect you to be able to implement a complete flocking algorithm without further study.

Flocking is inspired by flocks of birds that achieve mass coordination of movement even though each bird is making independent decisions. How does the flocking behavior emerge from local behavior? In the flocking model, often referred to as “boids” using the term coined by the inventor of the model, Craig Reynolds, there are 3 rules:

- Separation — avoid collisions with neighbors
- Alignment — steer in the same direction as your neighbors
- Cohesion — steer to the center of your neighbors to stay together

Let’s implement these rules in a particle system. The following is an example. There are comments in the code. This is based on code by Conrad Parker, but I found the separation routine caused jittery motion as particles moved in and out of the neighborhood, so I modified the force calculation to increase smoothly from 0 at the edge of the neighborhood. I also added an obstacle in the middle of the canvas that the particles avoid. Finally, the Particle object has some changes to allow forces to be summed before modifying velocity (requiring force to be explicitly zeroed at the beginning of each draw). For more info, see comments in the code.
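The three rules can be sketched as forces on a single boid. To be clear, this is our own rough illustration, not Conrad Parker’s code or the course implementation: the field names (`x`, `y`, `vx`, `vy`), the weights in `w`, and the function name are all assumptions.

```javascript
// Sketch of the three boid rules as a net force on particle p, given
// its neighbors (the particles within some neighborhood radius).
// Weights and field names are illustrative, not from the course code.
function flockingForce(p, neighbors, w = { sep: 1.5, ali: 1.0, coh: 1.0 }) {
  let f = { fx: 0, fy: 0 };
  if (neighbors.length === 0) return f;
  let cx = 0, cy = 0, vx = 0, vy = 0;
  for (let n of neighbors) {
    // Separation: push away from each neighbor, more strongly when close.
    let dx = p.x - n.x, dy = p.y - n.y;
    let d = Math.sqrt(dx * dx + dy * dy) || 1;  // guard against d = 0
    f.fx += w.sep * dx / (d * d);
    f.fy += w.sep * dy / (d * d);
    cx += n.x; cy += n.y;    // accumulate positions for cohesion
    vx += n.vx; vy += n.vy;  // accumulate velocities for alignment
  }
  let k = neighbors.length;
  // Alignment: steer toward the neighbors' average velocity.
  f.fx += w.ali * (vx / k - p.vx);
  f.fy += w.ali * (vy / k - p.vy);
  // Cohesion: steer toward the neighbors' center of mass.
  f.fx += w.coh * (cx / k - p.x);
  f.fy += w.coh * (cy / k - p.y);
  return f;
}
```

Note that separation here grows smoothly with proximity rather than switching on at a hard boundary, in the spirit of the jitter fix described above.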