"If you wish to advance into the infinite, explore the finite in all directions."

 

                             - Johann Wolfgang von Goethe, Epigrams.

 

 

 

 

Fractals come in all shapes, sizes and forms. In this chapter we will examine fractals with qualities beyond those of the conventional ones we have seen so far. These fractals typically exhibit features with unique attributes, focusing our interest on their special properties. Some, as we shall see, try to mimic nature by forming crystals or mountain shapes, while others create mathematical forms with no apparent connection to the physical world. The first one we will look at is a fractal called the Infinite Staircase, derived from principles of linear fractals.
 
The Infinite Staircase
 
The Infinite Staircase, often referred to as the devil's staircase, consists of an infinite number of steps within a contained length. Between any two of its steps lie continually finer steps, so no matter how small a step you need to take there will always be one to accommodate you -- and of course an infinite number in between. The sum of the steps' widths is equivalent to the removed sectors of the Cantor set seen in Chapter 3.
 

Figure 6.1 The first stages of the Infinite Staircase.
 
 

Here we will look at how the Infinite Staircase is constructed.
 
         • First a 1 x 1 unit square is constructed; this will contain the Infinite Staircase.
 
         • Next a column with a width of 1/3 and a height of 1/2 is placed in the middle of the bottom of our unit square. The coordinates are 1/3 to 2/3 lengthwise and 0 to 1/2 heightwise, as seen in Figure 6.1.
 
         • In the following step two columns are made, each with 1/3 the previous column's width (1/9) and with heights 1/4 and 3/4 respectively. The coordinates are 1/9 to 2/9 and 7/9 to 8/9 lengthwise, and 0 to 1/4 and 0 to 3/4 heightwise.
 
         • The general rule is as follows: columns are placed in the center of every gap at 1/3 the width of the previously placed columns, with a height equal to the average of the heights of the columns on either side. In the case of the outermost columns, the missing side heights are taken to be 0 and 1 respectively.
 
Information about the components added at each level n of the Infinite Staircase:

Number of columns: 2^(n-1).
Width of columns: (1/3)^n.
Height of each column: the average of the heights on either side of its gap; at level n the new heights are the odd multiples of 1/2^n.
 

Figure 6.2 The column data for changing levels and corresponding removed sectors of the Cantor set.
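To make the construction concrete, here is a minimal sketch in Python (the language is our choice; the book's own listings appear in Chapter 7). It generates the columns level by level and computes each height from the ternary expansion of the column's center; the function names are ours.

    def cantor_height(x, depth=40):
        """Height of the Infinite Staircase above the point x: read off
        ternary digits, turning 2s into binary 1s, and stop at the first
        digit 1 -- that means x lies under a column (a removed sector)."""
        height, scale = 0.0, 0.5
        for _ in range(depth):
            d, x = divmod(x * 3, 1)
            if d == 1:
                return height + scale
            height += scale if d == 2 else 0.0
            scale /= 2
        return height

    def columns(level):
        """(left edge, width, height) of each column added at a given level."""
        solid = [(0.0, 1.0)]            # intervals of the Cantor set kept so far
        for _ in range(level - 1):
            solid = [part for a, b in solid
                     for part in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
        return [(a + (b - a) / 3, (b - a) / 3,
                 cantor_height((a + b) / 2)) for a, b in solid]

    for n in (1, 2, 3):                 # 1, 2, then 4 columns of width 3**-n
        print(n, columns(n))

Running the sketch reproduces the data above: one column of height 1/2, then two of heights 1/4 and 3/4, then four of heights 1/8, 3/8, 5/8 and 7/8.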
 
 

The limit as the level approaches infinity is called the Infinite Staircase or devil's staircase, see Figure 6.3.
 

Figure 6.3 The completed Infinite Staircase.
 


Figure 6.4 The perfect match, seen as another Infinite Staircase is rotated 180° and placed next to it.
 

If a duplicate staircase is made and rotated by 180°, the two parts can be brought together to form an exact fit, see Figure 6.4. From this we see that the staircase's area has to be equal to half the area of the square. The same result can also be shown by adding up all the individual columns' areas, which

form the series

1/6 + 1/9 + 2/27 + 4/81 + ...

or

(1/6)(1 + 2/3 + (2/3)^2 + ...) = (1/6)(3) = 1/2.

Since the steps consist of perpendicular horizontal and vertical components of the original square, the staircase's total line length has a value of 2: a unit line length for each direction.
 

Figure 6.5 Scale Transformations vs. Affine Transformations.
 

If we were to reduce the full staircase by a half in height and a third in length, an affine transformation would be made -- a transformation that preserves a shape's basic integrity. Here the proportions would be changed by a factor of 1/3 lengthwise and 1/2 heightwise. To remake the original Infinite Staircase we could use six of these smaller staircases in place of the original, with no identifiable difference.

 

Figure 6.6 Self-Affinity of the Infinite Staircase.
 
Since the transformations are not an exact scale reduction of the overall image, we say the Infinite Staircase is not self-similar under the rules of scalability; rather, it is self-affine, see Figure 6.6.
 
 
In the next section we will look at non-linear fractals, using a process called Newton's Method to generate a wide variety of fractal shapes.
 
Newton's Method
 
" Lo, for your gaze, the pattern of the skies!
What balance of the mass, what reckonings divine!
Here ponder too the Laws which God,
Framing the universe, set not aside
But made the fixed foundations of his work..."
 
                        Edmund Halley, an ode dedicated to Sir Isaac Newton [1].
 
            Newton's Method applied to polynomial functions is one of the oldest algorithms of numerical analysis. It is a method for finding the root of a function using only information about the function's value together with its first derivative. While the local behavior often seems predictable, the global behavior can be extremely intricate. Visualized properly, the global dynamics of Newton's Method can be breathtaking [2], see Figure 6.7 and Color Plates 49-50.
 
Figure 6.7 Examples of images generated using Newton's Method.
 
            Although the method readily generalizes, we will restrict our attention to the complex plane. Suppose we are seeking a root of an analytic function f(z). Given a starting guess z0 we seek an improved guess z1. Expanding f in a Taylor series about z0, we obtain:

f(z1) = f(z0) + f'(z0)(z1 - z0) + O((z1 - z0)^2).

Setting f(z1) = 0 and solving for z1 while neglecting the quadratic error term, we get:

z1 = z0 - f(z0)/f'(z0).
 
Here we demonstrate Newton's Method by solving x^2 - 2 = 0. This is done by carrying the iteration to higher and higher levels, converging on the positive root √2.
 
            • First we replace f with its known variables: x1 = x0 - (x0^2 - 2)/(2x0).
            • Now we choose a positive guess value of x0 = 1; this is an arbitrary choice -- had we chosen a negative number the answer would converge to the negative root instead. The value 0 cannot be chosen here because it would make the denominator 0, which would cause the correction term to go to infinity. After putting in our guess value we are left with a new term of x1 = 1 - (1 - 2)/2 = 1.5.
 
Here is a list of the first six calculated iterated terms:
 
 
Figure 6.8 Table of iteration values for calculating √2 using Newton's Method.
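A few lines of Python reproduce such a table (a sketch of ours, assuming the starting guess x0 = 1 described above):

    def newton_sqrt2(x0=1.0, steps=6):
        """Newton's Method for f(x) = x^2 - 2: iterate x <- x - (x^2 - 2)/(2x)."""
        x = x0
        for n in range(1, steps + 1):
            x = x - (x * x - 2.0) / (2.0 * x)
            print(f"x_{n} = {x:.15f}")

    newton_sqrt2()
    # x_1 = 1.5, x_2 = 1.416666..., x_3 = 1.414215..., and by x_5 the value
    # agrees with sqrt(2) to the full precision of the printout.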
 
One can show that for points sufficiently close to a simple root, the number of digits of accuracy in the approximation doubles with each iteration. Each root has a basin of attraction: the set of points which converges to it under iteration of Newton's Method. Formally, if r is a root of f, we define its basin of attraction A(r) to be:

A(r) = { z0 : the Newton iterates z0, z1, z2, ... converge to r }.
 
In our computer experiments, we choose different colors for the different basins of attraction to obtain a good map of the asymptotic dynamics.
           

Figure 6.9 Sir Arthur Cayley's original paper, The Newton-Fourier Imaginary Problem.
           
            In 1879, English mathematician Sir Arthur Cayley (1821-1895) published [3] a description of the global dynamics of Newton's Method for quadratic polynomials, titled The Newton-Fourier Imaginary Problem. Cayley was able to show that the two basins are divided by the perpendicular bisector of the line segment connecting the two roots of the quadratic. However, on the dividing line itself, the dynamics is chaotic (in fact, conjugate to the map z -> z^2). Immediately, we are led to wonder if there is an analogue to this elegant result for three or more roots. For instance, one might expect the basins of z^3 - 1 = 0 to divide the plane into three equal wedges emanating from the origin. In fact, the situation is far more complicated, see Figure 6.10 and Color Plates 51-52. Using a variation of Newton's Method, iterating the positive form of the formula, can also produce interesting results, see Figure 6.11.

 

 

Figure 6.10 Newton's Method for the equation z^3 = 1, written z^3 - 1 = 0.
 
Figure 6.11 Iteration of a variant of Newton's Method [4].
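Here is a minimal sketch, in Python, of the computer experiment described above: iterate Newton's Method for z^3 - 1 = 0 from a grid of starting points and label each point by the root it converges to. The grid extent, iteration cap, and tolerance are arbitrary choices of ours.

    import cmath

    ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]   # roots of z^3 = 1

    def basin(z, max_iter=60, tol=1e-6):
        """Index of the root that z converges to under Newton's Method
        for z^3 - 1 = 0, or -1 if it fails to settle (the Julia set)."""
        for _ in range(max_iter):
            if abs(z) < tol:                  # derivative 3z^2 vanishes: give up
                return -1
            z = z - (z**3 - 1) / (3 * z**2)
            for i, root in enumerate(ROOTS):
                if abs(z - root) < tol:
                    return i
        return -1

    # Crude character map of the three basins over [-2, 2] x [-2, 2]:
    for row in range(24):
        y = 2 - 4 * row / 23
        print("".join(".ABC"[1 + basin(complex(-2 + 4 * col / 47, y))]
                      for col in range(48)))

Even at this crude resolution, the intricately interleaved boundaries between the three basins are visible along the wedge borders.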
 
A physical way to visualize this method can be seen with a pendulum and three equally spaced magnets. By releasing the pendulum from different locations and coloring each release point according to the region the pendulum ends up in, you can demonstrate the principles of attraction. Different mappings will occur depending on varying factors of friction and magnet strength, see Figure 6.12 for one possible mapping of a pendulum's basin of attraction. Different regions exhibit different forms: in some regions you will find that all points are attracted to the same attractor (magnet) -- the predictable areas -- while in other regions the slightest change in location will result in the pendulum ending up somewhere entirely different -- the chaotic areas.
 
Figure 6.12 A pendulum's basins of attraction with different regions of stability.
 
            Consider a rational function over the complex numbers:

R(z) = P(z)/Q(z),

where P and Q are polynomials. A remarkable theorem of Gaston Julia states that the boundary of each basin of attraction for a fixed point or periodic orbit is the same. It is the Julia set for the dynamics, as seen in Chapter 5, and consists of the exceptional points that fail to converge to any of the attracting fixed points or orbits, see Figure 6.13. In the case of Newton's Method applied to polynomials, the theorem clearly applies. If we color the basins of attraction differently to distinguish them from each other, we find that every point in the Julia set has points of all other colors arbitrarily close to it! The Julia set is woven by all of the basins of attraction simultaneously. Not surprisingly, it is typically a fractal.
 

Figure 6.13 A Julia set constructed using Newton's Method.
 

            Let us now consider a class of polynomial-like functions. An ordinary polynomial is a product of linear terms raised to positive integral exponents. What happens when we allow a product of such terms raised to arbitrary complex exponents? Set

f(z) = (z - r)^a.

This is generally a multi-branched function, since it takes on the values

(z - r)^a e^(2πika), k = 0, ±1, ±2, ...

However, if we evaluate the derivative of f along the same branch as f itself, Newton's Method takes a very simple form:

z1 = z0 - f(z0)/f'(z0) = z0 - (z0 - r)/a.

Clearly, Newton's Method will converge if and only if:

|1 - 1/a| < 1, which is equivalent to Re(a) > 1/2.

As Re(a) approaches 1/2 from above, convergence is slower and slower. In addition, if the exponent a has a nonzero imaginary component, points converge along spirals rather than lines, see Figure 6.14.
 
 

Figure 6.14 A converging spiral.
 

            Now consider a product of the form

f(z) = (z - r1)^(a1) (z - r2)^(a2) ... (z - rn)^(an).

The corresponding Newton's Method is given by:

z1 = z0 - 1/( a1/(z0 - r1) + a2/(z0 - r2) + ... + an/(z0 - rn) ) = z0 - P(z0)/Q(z0),

where P is a polynomial of degree n and Q is a polynomial of degree n - 1. Even though we began with a function with arbitrary complex exponents, Newton's Method yields a rational function, and Julia's theorem still applies. The form above is well-suited to numerical calculations, see Color Plate 53 for illustrations.
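Since f'/f = a1/(z - r1) + ... + an/(z - rn), the iteration can be computed without ever evaluating the multi-branched f itself. A hedged Python sketch (the roots, exponents, and starting point are arbitrary examples of ours):

    def newton_step(z, roots, exps):
        """One step of Newton's Method for f(z) = prod (z - r_i)^(a_i),
        computed as z - 1/sum(a_i/(z - r_i)) -- no branch cuts needed."""
        return z - 1 / sum(a / (z - r) for r, a in zip(roots, exps))

    roots = [1.0, -1.0]              # arbitrary example roots
    exps  = [0.6 + 0.3j, 1.0]        # Re(a) > 1/2, so the method can converge,
    z = 0.4 + 1.2j                   # spiraling because Im(a) is nonzero
    for _ in range(30):
        z = newton_step(z, roots, exps)
    print(z)                         # typically lands near one of the roots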
 
            By allowing the exponents to be complex numbers rather than positive integers, we achieve several things: we can explore a continuum of behavior by varying a, we can explore the limits of Newton's Method convergence by letting Re(a) approach 1/2, and we can explore the rotational effect induced by adding an imaginary component to a.
 
            The reader is encouraged to experiment with the program listing provided for Newton's Method applied to the polynomial-like functions we have been discussing, see the Chapter 7 algorithm on Newton's Method, or try your own equations using an expanded version of MandelMovie [5]. See Color Plates 54-55 for examples of fractals that you can generate using these methods.
 
Here is one generated using such an equation, magnified around a point of interest; notice the familiar object in the center.

 

Figure 6.15 The recurring Mandelbrot set found in a Newton's Method image.
 
 
In the next section we will explore fractals generated using randomness.
 
Randomness and Its Staggering Results
 
"Everything in the universe goes by indirection. There are no straight lines."
                                                -Ralph Waldo Emerson, 1870

 

Figure 6.16 Brownian motion of a pollen path © 1911 Encyclopedia Britannica.
 
         Randomness is found in many types of circumstances. Often events that are predictable for limited intervals become unpredictable as the forecast is extended; this is similar to the transitions found in the chaos models seen in Chapter 2. For instance, random motion caused by collisions can soon produce parameters that are too numerous to keep track of, even in situations where slight changes do not produce drastically different results. These paths that arbitrarily change direction are often referred to as Brownian motion.
 
         The first detailed account of such apparent random motion was given by the British botanist Robert Brown (1773-1858). In 1827 Brown examined the motion of pollen particles suspended in water under a microscope. He noticed that their motion would continue in one direction, then abruptly change course and velocity. Later observations by others, including Albert Einstein [6] (1879-1955), concluded that such motion was caused by collisions with unseen particles.
 
         Since these paths exhibit a repeating type of pattern with unpredictable outcomes, they fall under the jurisdiction of random fractals. In Figure 6.17 you can see a similar pattern at different scales of magnification in a plane produced by this random path. This type of Brownian motion is commonly referred to as a 'random walk'.
 
 
" Nothing in Nature is random... A thing appears random only through the incompleteness of our knowledge. "     
                                                                        Baruch Spinoza

 

Figure 6.17 Random Walk at three levels of magnification.
 
Brownian motion -- also referred to as Brownian noise -- is found in other dimensional settings as well. Examples include the number of times heads comes up in comparison to tails when flipping a coin (a linear path) and the motion of a particle in a gas as it collides with other particles (a volumetric path).
 

Figure 6.18 Brownian motion in linear and volumetric settings.
 

Now let's look at and compare different types of randomness.
 
In another form of randomness, every possible position has the same probability of being reached every time, independent of prior results. This form of randomness is referred to as white noise. It differs from Brownian motion, where a particle's location is influenced by its prior position. Each has its own characteristic properties.
 
Sometimes a combination or averaging of white noise and Brownian noise is used, which we call 1/f noise [7]. This is the pattern formation most frequently associated with music, occupying the range between Brownian noise, or 1/f^2 sounds, associated with predictable patterns, and white noise, or 1/f^0 sounds, associated with static. Linear dependence increases as the exponent β in 1/f^β is increased, see Figure 6.19 for a plot of this correlation for various sound patterns vs. time [8].
 

Figure 6.19 Linear patterns of three different types of sounds.
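One standard way to generate 1/f^β noise is spectral synthesis: give random-phase Fourier components amplitudes falling off as f^(-β/2), then transform back. A sketch of ours using NumPy (β = 0 gives white noise, β = 2 Brownian noise, β = 1 the musical middle ground):

    import numpy as np

    def one_over_f_noise(n, beta, rng=np.random.default_rng(0)):
        """Sample n points of 1/f^beta noise by spectral synthesis:
        random phases, amplitudes falling off as f^(-beta/2)."""
        freqs = np.fft.rfftfreq(n)              # 0 .. 0.5 cycles per sample
        amps = np.zeros_like(freqs)
        amps[1:] = freqs[1:] ** (-beta / 2)     # power spectrum ~ 1/f^beta
        phases = rng.uniform(0, 2 * np.pi, len(freqs))
        return np.fft.irfft(amps * np.exp(1j * phases), n)

    white = one_over_f_noise(1024, 0.0)    # static
    pink  = one_over_f_noise(1024, 1.0)    # the musical "1/f" middle ground
    brown = one_over_f_noise(1024, 2.0)    # Brownian noise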
 

Let's look at creating our own Brownian steps vs. time plot.
 
        
       Plotting One-dimensional Brownian Motion.
 
Figure 6.20 Graph paper, coin and corresponding motion.
 
In this exercise we will plot two Brownian motion graphs and compare their differences. You will need two sheets of graph paper and a coin; a die could also be used, with even numbers representing heads and odd numbers representing tails. If you don't have graph paper you can make your own grid or copy the grid from the back of the book in Appendix C.
 
         Step 1: Place a sheet of standard graph paper and mark a starting point at the center line on the left-hand side.
 
         Step 2: Move one square to the right and flip the coin. If the coin lands heads move one square up, if it lands tails move one square down. Now mark the new spot. Repeat this procedure from the spot last marked until you reach the end of the paper.
 
         Step 3: Carry out the same procedure with another sheet of graph paper, then compare the similarities and differences of the two graphs. Most likely you will see a distinct difference even though both were created the same way. (A short program that simulates this experiment appears below.)
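The same experiment takes a few lines of Python, with random.random() playing the coin; just as with the two sheets of graph paper, no two runs will match:

    import random

    def coin_flip_walk(steps=80):
        """Heads: move one square up; tails: one square down (Step 2 above)."""
        height, path = 0, [0]
        for _ in range(steps):
            height += 1 if random.random() < 0.5 else -1
            path.append(height)
        return path

    # Two independent "sheets of graph paper" -- alike in character,
    # different in detail:
    print(coin_flip_walk())
    print(coin_flip_walk())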
 
Randomness can be used in making linear fractals too. Here a Koch curve [9] has been produced with seed orientations chosen at random.
 

Figure 6.21 The Koch curve with random orientations.
 

Another type of alteration is choosing different function mappings for a given fractal. Here a transformation of the Koch curve [10] is shown under a mapping of the plane, see Figure 6.22.
 

Figure 6.22 The Koch curve under a mapping transformation.
 
 

Now let's look at creating our own Brownian motion.
 
       Random walk Game.
 
Step 1: Roll a 6-sided die and multiply the value by 60° to pick a direction. You can subdivide the direction values even further if you like by rolling the die again, multiplying the new value by 10°, and adding it to your prior result.
 
Step 2: Take a chosen unit step (1 cm seems to work well) in that direction. It helps if you use a protractor and ruler to mark your direction and distance as you travel.
 
Step 3: Repeat Step 1 and Step 2 until you get some nice random walk patterns. (A minimal simulation of the game follows.)
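Here is that simulation in Python, assuming the 60° and 10° multipliers of Step 1:

    import math, random

    def random_walk(steps=100, step_len=1.0):
        """The dice game: each step heads off at 60 degrees per pip,
        refined by a second roll worth 10 degrees per pip."""
        x = y = 0.0
        points = [(0.0, 0.0)]
        for _ in range(steps):
            angle = math.radians(60 * random.randint(1, 6) +
                                 10 * random.randint(1, 6))
            x += step_len * math.cos(angle)
            y += step_len * math.sin(angle)
            points.append((x, y))
        return points

    end = random_walk()[-1]
    print(math.hypot(*end))   # on average about sqrt(steps) * step_len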
 
You might make a few different plots and compare their variations. In Chapter 7 there is a program you can copy to let the computer plot randomly generated values of directions for a given step size.
 
Activity: try this exercise in the park using walking steps.
 
Calculated dimensions of random walks.
 
 
Random walk patterns have a calculated dimension. To visualize this we take a region produced by a random walk and compare it with steps of different scale sizes. For example, if the step size is reduced to 1/10th, the average number of steps needed to reach a given radius increases by a factor of roughly 100 (this value varies greatly; generally a large average sample is needed to verify the result). This ratio holds true at all scales, so we say this scaling is statistically self-similar.
 
The average distance traveled (radius) by a random path works out to be equal to the square root of the number of steps, multiplied by the step length [11], see below.

ratio formula: R = L√N

calculated dimension for a scale reduced by 1/10: D = log 100 / log 10 = 2


Figure 6.23 Formula for the random walk's calculated dimension and an equated value.
 

In this section we have looked at randomness in understanding unpredictable patterns and varied audible sounds. Other systems also create varying pattern shapes; one of the most notable is a method called Cellular Automata [12]. In Cellular Automata, elements interact with one another to form continuous feedback loops. Sometimes these feedback loops produce very symmetrical patterns, similar to the Ethiopian Cross seen in Chapter 3; other times they produce patterns that look completely random, comparable to the percolation graphs seen in Chapter 1 and the aggregate growths seen later in this chapter. Here is an example of Cellular Automata producing numeric cells with an endlessly repeating pattern, found in a computer cell-simulation game called 'Life', see Figure 6.24.
 

Figure 6.24 Patterns produced by Cellular Automata feedback.
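As a sketch of how simple the feedback rule is, here is one generation of 'Life' in Python, applied to the well-known glider pattern (the layout is our example):

    def life_step(cells):
        """One generation of 'Life': a live cell survives with 2 or 3 live
        neighbors; an empty cell comes to life with exactly 3."""
        counts = {}
        for x, y in cells:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        counts[x + dx, y + dy] = counts.get((x + dx, y + dy), 0) + 1
        return {pos for pos, n in counts.items()
                if n == 3 or (n == 2 and pos in cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):          # after 4 generations the glider has moved
        glider = life_step(glider)
    print(sorted(glider))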
 

In the next section we will use random fractal structure in examining forms found in nature.
 
Fractal Mountains and Fractal Clouds
 
 

Figure 6.25 Fractal mountains and clouds.
 
 

Natural elements have played a crucial role in advancing fractal geometry; these gains have been spurred on by the desire of its users to create landscapes similar in appearance to those found in nature. Mountains and clouds are some of the earliest images created to incorporate this mathematical technique. In an effort to create realistic-looking environments, a form of fractal generation emerged that combined predetermined structure with an element of randomness. An early pioneer was physicist Richard Voss (1948-    ). While at IBM, he and fellow researcher Benoit Mandelbrot (1924-    ) developed techniques to create these images using computers. Many of their early results are found in the book Fractals [13], published in 1977. Since then continual improvements have been made, so that today many pictures created from these fractals are almost indistinguishable from pictures taken of the physical world. There is a humorous story of Benoit Mandelbrot submitting fractal pictures of clouds for an article in Scientific American, only to have them rejected for fear that readers would conclude that the pictures were of real clouds; see Color Plates 57-58 for 'realistic' looking landscapes.
 
Two predominant places where fractal landscapes are being cultivated are projections for flight simulators and planetary simulations for movies. In the late 1970's Boeing Aircraft engineer Loren Carpenter started experimenting with fractals to create terrain that closely resembled what pilots see. Later this work would inspire Carpenter, who transferred to ILM (Industrial Light and Magic's special effects department) [14], and others there, particularly Alain Fournier and Don Fussell, to use fractals to create the Genesis Planet for the movie "Star Trek II: The Wrath of Khan", see Color Plate 58. At first no one would undertake the task because of the complexities involved. The traditional Gaussian methods that Mandelbrot and Voss had used in their landscapes would easily have taken many months, if not years, of mainframe computer time to generate the number of frames needed for a film. This was due in large part to the number of data sets that needed to be calculated, each point requiring an average distribution with respect to its surrounding neighbors.
 
The first thing the group at ILM [15] realized was that they had to keep the calculations simple so they could easily and quickly compute the random mountainous displacements. This was accomplished by using a method of recursively subdivided triangles and then calculating only the portions that needed to be displayed on the screen. Other technical problems also arose that had to be overcome. There were problems where seams would appear between the edges of two subdivided regions [Figure 6.30]; here heights had to be averaged out so as not to interfere with the mountains' continuity. Calculating the resolution needed for every region of each particular frame became an essential part of diminishing the calculation time. Factors of lighting and motion positioning also had to be resolved. In the end it all came together; Mandelbrot said of their work, "[I] would have loved to follow their work as it evolved."
 
 
Next we will explore how you can create silhouettes of mountains with FractaSketch.
 
      Creating Mountainous Outlines using FractaSketch.
 
Using techniques learned in Chapter 3, you can use FractaSketch to draw a 'seed' of connected line segments that mimics the outline of a mountain top. In Figure 6.26 we demonstrate one of the almost endless possibilities you can use. By changing the lengths and positions of segments you can create a whole range of mountainous outlines. Try it! Hint: a nice region to experiment with is the 1.01-1.11 dimensional range, using between 4-7 line segments linked together. After you are done creating your outline you can import it into a paint program and, using dithering techniques, produce some spectacular results, see Color Plate 59. Also, by opening the FractaSketch files Mount Fractal and Tempete you can see other fractal outlines formed from basic line components, as seen in Figure 6.27.
 

 

Figure 6.26 FractaSketch seed and resulting outline.
 

Figure 6.27 FractaSketch's Mount Fractal and Tempete.
 

Now let's look at fractal techniques that produce mountainous surfaces.
 
Making mountains and clouds out of fractals.
 
The most common method of making fractal mountains is the use of recursively subdivided triangles [16]. This rather straightforward approach can produce quite satisfying results, see Figure 6.28, Figure 6.29 and Color Plate 60. In constructing mountains and clouds it would be difficult to physically expand and contract triangles, so this type of process is generally restricted to computer calculations, where images can easily be manipulated. An element of randomness is usually added in determining the amount a subdivision is transformed; this gives a created mountain a varied contour closer to a natural appearance. The greater the fluctuation, the more jagged the image and the higher the corresponding fractal dimension. Coloration and shading also aid in adding realism to the image.
 

Figure 6.28 First stage for a simulated mountain.
 
 


Figure 6.29 Different stages of mountain construction.
 

Here is an example of what can happen if two independent segments come together without side height averaging.
 

Figure 6.30 Seams that form between fractal mountain segments.
 

Let's see how fractal mountains are made. (A short code sketch follows the steps below.)
 
         • First each side of a standard triangle is divided in two. For our construction we will use the midpoint of each side. Divisions at other points of the line segments would also work, but the midpoint approach allows for a more evenly distributed mountain surface.
 
         • Now the 3 midpoints are connected with 3 line segments. This results in 4 equally sized triangles.
 
         • Next the 3 midpoints are arbitrarily raised or lowered. This is done by changing the midpoints' locations; the displacement amount is determined by a Gaussian random multiplier proportional to the line segment's length. This results in the transformation of the 4 triangles.
 
         • The subdivision process is then started all over again with each of our 4 triangles, resulting in 16 new triangles. This iterative transformation can be carried out until the desired resolution is achieved.
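Here is the promised sketch in Python. It follows the four steps above, jittering each midpoint height with a Gaussian proportional to the edge length; the roughness constant is our arbitrary stand-in for the book's multiplier. Because each triangle jitters its own copy of a shared edge, this naive version produces exactly the seams of Figure 6.30; a fuller implementation would cache one displacement per edge.

    import random

    def subdivide(tri, level, roughness=0.35):
        """Midpoint-displacement mountain: recursively split a triangle of
        (x, y, height) vertices into 4, jittering each new midpoint height."""
        if level == 0:
            return [tri]
        def mid(p, q):
            edge = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
            return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2,
                    (p[2] + q[2]) / 2 + random.gauss(0, roughness * edge))
        a, b, c = tri
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return [t for child in children
                for t in subdivide(child, level - 1, roughness)]

    facets = subdivide(((0, 0, 0), (1, 0, 0), (0.5, 0.9, 0)), level=4)
    print(len(facets))    # 4**4 = 256 triangular facets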
 
As seen earlier with exactly self-similar fractals in Chapter 3, since larger components are made in the same way as the smaller components that comprise them, you will find similar structures at all levels of magnification. The amount of change relates to the mountain's fractal dimension, see Figure 6.31.
 
 

Figure 6.31 Fractal mountains with different dimensions.
 

Fractal planets can also be created with random displacement; however, instead of starting with a triangle, a sphere made up of triangles is used, with colors assigned to varying heights. For example, top regions might be assigned white for snow and lower regions green for vegetation. In addition to mountains, oceans can be made by choosing an arbitrary minimum height value; any height that falls below that value is given the minimum height value with a corresponding ocean color. Realistic-looking fractal mountains have dimensions that typically range from 2.1 to 2.5.
 

Figure 6.32 A fractal island in an imaginary ocean.
 

Clouds
 
Fractal clouds are formed using the same random procedure as mountains. The difference in construction is that instead of using the randomness to produce height differentials, it is used to produce the density of the cloud. Most cloud densities calculated in a plane have a dimension [17] of 1.37 < D < 1.41, and in three dimensions a density with 2.50 < D < 2.75; see Color Plate 61 for examples of fractal clouds.
 
 
Multifractals
 
Multifractals look at the varying dimensions of a structure's component parts. The amount of a particular dimension found in an object is referred to as its measured dimension. It is usually calculated as a percentage of the complete overall structure. For instance, in Chapter 4 we saw how the state of California was divided into 3 distinct dimensional regions: the linear border, the sandy beaches and the rugged coastal shoreline. If a spectrum were taken of the state border's fractal dimension, strongly defined spikes would correspond to these three regions. Many other areas exhibit varying dimension in different regions. They include crystal growth patterns, rock formations, particle density in planetary rings, strange attractors, turbulence and disturbances in data transmission.
 
 
The properties of multifractals can also be seen in Iterated Function Systems, where the probability distribution of a region varies for different locations, as seen in the IFS section in Chapter 7.
 

Figure 6.33 Different dimensional slope values for different ranges found in a multifractal.
 

In measuring fractals with varying dimension, it appears that in most cases the different fractal dimensions form in clustered regions rather than being evenly distributed.
 
Fractal Image Compression
 
"There's only so much information in a spore that encodes a fern. So there's a limit to the elaborateness with which a fern could grow. It's not surprising that we can find equivalent succinct information to describe ferns. It would be surprising if it were otherwise." -Michael Barnsley, Chaos.: Making a New Science
 
 
Interest in data compression, most notably for images, has set off a flurry of searches for schemes that offer the best compression ratios. Using existing compression techniques, ratios of 50:1 are typically considered the upper bound before images experience considerable degradation in quality. Fractal transformation methods are being explored in hopes of raising compression ratios even further. One of the main persons involved in this development is Michael Barnsley. In his book "Fractals Everywhere", published in 1988, Barnsley gives abstract discussions of possible methods, but unfortunately makes little reference to specific results or algorithms. In 1989 he started a company to commercialize IFS compression and his earlier work.
 
The question being asked is whether such a system is possible, and if so, whether it is feasible with current technology. At stake are billions of dollars, as companies [18] try to find the most effective means to transfer and store the large amounts of visual and auditory data needed for movies and interactive conferencing.
 

Figure 6.34 Images that have been compressed to different levels.
 

Possible Fractal Compression Schemes.
 
Fractal compression schemes for images are being developed in different ways to exploit the fact that pictures inherently use similar patterns and colors throughout. One method incorporates the fact that similar objects or structures are generally found in the same picture at different scales. Take for example a picture of a forest with mountains in the distance. If you divide it up into component parts and keep only the unique parts, you could reduce the amount of space needed to store the information considerably. The sky might be stored as a light blue diffused pattern that covers a certain section; a tree's information could be reduced to a given seed shape with scale, height, density, limb formation and color all stored as a few data points, similar to the way FractaSketch does it now, only in reverse and on a considerably more sophisticated scale. The mountains might be stored as a terrain with a given set of parameters: scale, color, corresponding fractal dimension and contour, similar to the way some fractal mountains are made now. In Figure 6.35 we use a still frame from an old black-and-white movie to illustrate some of the self-similar principles involved; notice how similar the three section pairs are to each other. Also see Color Plate 62 for another example of how pictures often exhibit a repeating pattern throughout.
 

Figure 6.35 A movie still from the 1918 Charlie Chaplin film A Dog's Life showing repeating patterns.
 

One of the main problems we have now in implementing many compression systems (especially fractal compression) is the amount of time and sophistication it takes a computer to extract the meaningful data needed to form a picture. In fractal compression, images often lose subtle detail due to the adaptation of other parts whose self-similarity to their own is not exact. In the future, as machines get faster and algorithms are developed to make effective use of visual information, considerable strides should be made in their implementation.
 
Fractals have come full circle: Mandelbrot originally used fractal geometry to examine the patterns of errors in data communications so those errors could be filtered out. Today these techniques are being used to encode and compress data so it can be sent in increasingly larger amounts; it's funny how things change.
 
Aggregate Growth: the Fractal of Crystals
 
Some fractals take on their fractal nature by growing into it.  These organic and inorganic structures grow with the objective of seeking out new territory -- to grow where they haven't gone before.  However, their growth is constrained, so they proceed selectively, sending out investigative, branching tendrils.  As this happens on all spatial scales, a self-similar nature evolves.  They have grown into fractals.
 
 
Immobile organisms, in their pursuit of food or sunlight, may grow into just such fractal forms.  Fungus, lichen, coral, seaweed, cell colonies, tree branches and plant roots offer themselves as good examples.  The neural structures of our brain and the alveoli in our lungs are thinking and breathing examples of the results of fractal growth.  We also notice these sorts of fractal structures in the inanimate realm as tea stains a wet napkin, paint dissipates in water or frost forms on the windshield.  We may also have placed our hands on the "lightning ball" at the science museum and watched the play of the dendritic, spidery electric discharges along the inside of the sphere. Less agreeably, microscopic growth fractals float in our air and stop up our chimneys, under the pseudonyms of dust and soot.
 

Figure 6.36 Neurological cells showing aggregate structure.
 

These everyday events have counterparts in scientific research and engineering.  A drop of paint disperses in water in much the same way that water disseminates when engineers use it to "flush out" oil deposits in shale.  One problem with this extraction process is the fractal nature of the interaction of miscible fluids, or ones that "mix up".  The "lightning ball effect" in the science museum is an electrical phenomenon that a physicist would call dielectric breakdown in an insulator, while frost on the windshield is but one example of anisotropic aggregation.  Aggregation is the process of suspended particles sticking to one another to form connected structures, while anisotropic aggregation, or crystallization, is a more constrained aggregation in which particles connect only in favored lattice directions.  It is the study of these sorts of phenomena that has led to the theory of Diffusion Limited Aggregation, or Laplacian fractals.
 

Figure 6.37 Growing aggregation pattern found in liquid bonding, © 1989 Bart Ebbinghaus and Bernt Wahl.
 
 

It is particles moving randomly and sticking to one another in an aggregative process that suggested the simple but surprisingly general model of Witten and Sander called Diffusion Limited Aggregation (DLA). In this model, we begin with a particle (or "seed") at the center of a disk.  At the boundary of the disk, a particle is released to begin a random walk.  It is not allowed to exit the disk and will eventually hit the seed in the middle.  At the point of contact, it sticks.  We then release another particle at the boundary and it finds its own random path to join the growing cluster.  This is repeated again...and again.  Figure 6.38 is an example of an aggregate structure grown on a computer by this technique.  The structure, though simply constructed, bears a striking resemblance to many natural objects, yet still has many enigmatic properties which we can now begin to observe.
 

Figure 6.38 A 4700-particle diffusion limited aggregate. The particles added later are printed darker to emphasize the growing tips.
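A minimal sketch of the Witten-Sander model in Python (a lattice walk; the release and kill radii are arbitrary choices of ours):

    import math, random

    def dla(particles=300):
        """Diffusion Limited Aggregation: random walkers released on a
        circle around the cluster stick at their first contact with it."""
        cluster = {(0, 0)}                  # the seed particle
        r_max = 0.0                         # radius of the cluster so far
        for _ in range(particles):
            r = r_max + 5                   # release circle
            a = random.uniform(0, 2 * math.pi)
            x, y = round(r * math.cos(a)), round(r * math.sin(a))
            while True:
                dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
                x, y = x + dx, y + dy
                if math.hypot(x, y) > r_max + 20:        # strayed: re-release
                    a = random.uniform(0, 2 * math.pi)
                    x, y = round(r * math.cos(a)), round(r * math.sin(a))
                elif any((x + ex, y + ey) in cluster
                         for ex, ey in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                    cluster.add((x, y))                  # stick at first contact
                    r_max = max(r_max, math.hypot(x, y))
                    break
        return cluster

    print(len(dla(200)))    # 201 points: the seed plus 200 stuck walkers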
 

We can observe that there are few, very few, newly added particles near the center of the structure.   Intuitively, we realize that a released particle is almost certain to bump into the aggregate at its outermost parts as opposed to navigating down an increasingly narrowing "fjord" to get to its sticking destination.  This "screening" attribute of the model is manifested in the aforementioned growth processes as well. The relation between the DLA model and clustering particles is straightforward.  However, dielectric breakdown and the mixing of miscible fluids also favor growth from pointed ends, as will be seen below.  
 
Another essential element of the DLA model is the presence of randomness. In aggregate clustering, this randomness is introduced by the Brownian motion of particles. Noise, or randomness, is also present in viscous fingering and dielectric breakdown in the form of imperfections in the underlying materials. Noise is essential to the formation of branches at the tips. Without it, one would expect more symmetric formations.  This may explain why DLA structures are sensitive to anisotropies; with anisotropies, aggregates such as snowflakes develop more symmetrically. The regularity imposed on the particle connecting directions has reduced the amount of noise variation involved.
 
 
Aggregation has an "out to in" nature - particles move from outside to join the cluster - whereas dielectric breakdown and viscous fingering grow from "in to out".   For this reason, the DLA model might not be expected to mix theoretically with dielectric breakdown and viscous fingering.

 

Figure 6.39 An alternative aggregate growth.
 
 
 
Although DLA is rather simple in its description and construction, it is not well understood why this model forms a fractal.  There is no mathematical theory by which we may predict what will happen should we "tweak" certain parameters.   As is not unusual in the realms of fractal theory and mathematics in general, a simple model gives us a complex and fascinating set of behaviors that eludes theoretical understanding.  However, having a good model of physical phenomena is useful in itself.  The calculation of its fractal dimension alone gives us interesting applications of fractal theory toward predicting the behaviors of aggregates such as soot or dust or solids formed from vapor.  An example of this is the above calculation of the average density with respect to the size of the aggregate.   There are, however, more surprising applications.
 
A fractal aggregate, like any structure, will interact with external forces.  Surprisingly, much of how it interacts is dependent on its Hausdorff dimension.  Questions we might ask are how a solid aggregate affects fluid flow, or how aggregates interact with one another in solution. We might also ask what we will see if we look at an aggregate or through a group of aggregates.
 
As mentioned, one indispensable element of DLA is the random movement of the aggregating particles.  Had the particles been sent off in straight-line paths, a very dense globular object would have appeared, the single-minded particles not being particularly attracted to the tips of the forming structure.  The difference between these particle paths may also be set in the realm of fractal theory.  The straight-line path gives us a physical and Hausdorff dimension of 1.  The path of a particle randomly walking in the plane has the space-filling Hausdorff dimension 2.
 
If two fractals of radius r are placed on top of each other, the number of intersections N may be found in terms of the fractal dimensions, D1 and D2, and the dimension of the physical space, d:

N ∝ r^(D1 + D2 - d).
With this in mind, we may answer one of our questions.   Should a "light ray" from a point inside the aggregate be sent out,  would it intersect the aggregate again?  This would be equivalent to not being able to see this point.  In the case of the fractals we have been looking at, the number of intersections is proportional to r^(D + 1 - d), the ray contributing dimension 1.  Thus if we were restricted to the two-dimensional plane, we would expect our "view" of the inner particles to be blocked, although the larger the aggregate, the smaller the density of these blockings.  More interestingly, in three dimensions, when certain related kinds of clusters have sufficiently small Hausdorff dimension, between 1 and 2, the negativeness of the exponent (it is between -1 and 0) tells us that as the cluster increases in size, we may expect to see an increasing percentage of the aggregate, as intersections decrease with size.
 
However, should a diffusing particle be unleashed from such a structure and begin a random path, as opposed to a straight line path, our exponent is one greater and positive, so we may expect the particle to be caught up in the aggregate again.
 
Since particles "rediffused" from a DLA are caught up again, we may use this to create a variant DLA model.   Rediffusion allows particles to move further into the structure.  This sort of activity inhibits tip growth and undermines the fractal nature of DLA.   DLA with rediffusion models the interaction of two fluids with relatively high surface tension, as greater surface tension creates the rounding off of tip structures.   We may apply this thinking to our oil extraction process, increasing the efficiency of flushing oil out from porous rock by adding solvents to the water to increase the surface tension between the two fluids and thereby dampen the fractal fingering, see Figure 6.39.
 
Using the relation above, we may address some of our other questions about our fractal aggregates.   If the sum of D1 and D2 is greater than d, or equivalently, the exponent is positive, the two fractals are said to be mutually opaque.  (One will obscure the other from view.)  When aggregates are mutually opaque (D1 + D2 > d), they tend not to interpenetrate with one another in dilute solution.  Because of this, they actually interact like spheres.  Therefore, the osmotic pressure of such a dilute solution in contact with a semi-permeable membrane will be just like that of a dilute solution of solid objects.  Similarly, an aggregate affects fluid flow as if it were a hard sphere, and therefore aggregates in solution affect the viscosity of the fluid as if they were hard spheres.  These behaviors are not obvious, as aggregates have small average density.
 
It is worthwhile to mention two other areas of study of fractal aggregate structures.  One is the study of their multifractal nature.  As seen above, the probability that a random particle will land on a given area of the aggregate gets larger as we move towards the "tips".  By taking sets of these probabilities and decomposing the fractal into the sets of points associated with them, we create several fractals, getting a range of fractal dimensions associated with the original fractal.  Other scaling exponents, such as the Lipschitz-Holder exponent alpha, may be calculated from the probability distributions associated with these subfractals.
 
The other area of study, which actually models dust and soot much better, is the more natural "cluster to cluster" model, where aggregates form into small clusters that then cluster together.  This model is actually much better understood theoretically, the finite limiting nature of the fractal dimensions of cluster-to-cluster aggregates in higher dimensions coming into play. Look around and see how many of these patterns appear in our surroundings.
 

Figure 6.40 Aggregate clustering.
 

Now let's look at another form of fractal formation, one where points are mapped to regions whose shape only becomes apparent after a certain level of density is reached.
 
 
Iterated Function Systems: What You See Is Not Always What You Get
 
"In the well there is a clear, cold spring
From which one can drink"
-- I Ching
 
Michael Barnsley (1946-    ), professor of Mathematics at the Georgia Institute of Technology, pioneered the use of sets of very simple equations to generate fractals.  With the techniques and principles he developed, it is possible to model objects in nature, as well as create abstract images of great complexity and beauty.  We are going to study fractals that are generated by these sets of equations, called Iterated Function Systems (or IFS).  The simplicity of these equations allows us to control the way the fractals look, to shape them in ways we intend.  Because of this simplicity, IFS is the branch of fractal geometry which lends itself most easily to commercial graphic arts, and it has also shown promise in video technology as a way of compressing images (reducing the amount of data necessary to store the image).  Other branches of fractal geometry are oriented more towards exploration, but IFS fractals are useful when you know how you want the fractal to look.  Studying IFS fractals develops our ability to see an object as a structure made up of little copies of itself.  Moreover, because IFS fractals are simple, we can be in complete control of the process as we produce these visually striking objects.
 
Here's Looking at Fractals, Kid
 
Let's look at some interesting fractals that demonstrate self-similarity.  The first fractal, shown in Figure 6.41, is called a "Dragon" fractal because it resembles a dragon.  Can you see how the two silhouettes of the fractal in the lower right-hand corner are miniature versions of the silhouette in the center?  Can you see how all the silhouettes are shapes contained in the fractal itself? If you look closely, you will see that the fractal is nothing but miniature copies of itself, repeated over and over, down to the limit of the dot size of the picture.  Figure 6.42 shows two more dragon fractals with their respective silhouettes.  There are many related dragon fractals -- variations on a theme.
 

Figure 6.41 A "Dragon" fractal and silhouettes.

Figure 6.42 Two "Dragons" and their silhouettes.  The silhouettes are smaller versions of the whole fractals.
 

The silhouettes of Figure 6.42 show the two largest "copies" of each fractal.  Each of these large silhouettes is generated by transforming the original image.  These dragon fractals are generated by two transformations, because each is made up, fundamentally, of two smaller versions of itself.  Compare the previous fractals with one generated by three transformations, shown in Figure 6.43.  With this fractal, we can't divide the picture into two smaller versions of itself -- the only way to subdivide it and retain the similarities is to divide it into three subunits.
 

Figure 6.43 A fractal of three transformations.
 


Figure 6.44 The Sierpinski Triangle.
 

The most famous fractal which contains this "three-way" nature is the Sierpinski Triangle, shown in Figure 6.44.  Notice that the upper triangle is a shrunken version of the whole.  The lower left and lower right triangles are shrunken versions as well.  The Sierpinski Triangle is a good example of self-similarity because we can easily see how the whole triangle contains miniature versions of itself.
 
We've looked at how fractals are generated by a number of transformations.  What are mathematical transformations and how do they work?  The transformations we'll discuss are called linear transformations, and they work in the following ways:

    translation:  moving an image around;
    scaling:  shrinking or expanding an image;
    rotation:  spinning the image around;
    skewing:  tilting the image.
 

Figure 6.45 Translation.

Figure 6.46 Scaling.

Figure 6.47 Rotation.

Figure 6.48 Skewing.
 


Figure 6.49 This image (top-left), is skewed (top-right), then rotated (bottom-left), then scaled (bottom-right).
 

Examples of translation, scaling, rotation, and skewing are shown in Figures 6.45 - 6.48.  Different combinations of these four building blocks can be used sequentially to take one image and turn it into another, and this sequence is a linear transformation also.  For example, we can take Figure 6.49 top-left, skew it so that it looks like top-right, rotate it so that it looks like bottom-left, and then scale it so it looks like bottom-right.   Combined, all of these "elementary" transformations become a single linear transformation, and conversely, any transformation that can be broken down into these elementary transformations is a linear transformation also.  If the result of a linear transformation is smaller than the original image, we call the transformation contractive.  Transformations which are contractive turn out to be the most useful in the context we will be using them in.  The reason for this will be explained later.
 
We've seen that there is a class of fractals which is made entirely of miniature copies of themselves.  We've also seen transformations which are capable of turning one image into another.   These two concepts can be joined together by understanding how systems of contractive linear transformations can be used to generate the sorts of fractals we've been examining.  Let's relate this to the fractal of Figure 6.41 to clarify what this means.
 

Figure 6.50 A "Dragon".  If the fractal is "fed" into one of its two transforms the
result is the highlighted area of the silhouette.
 
Figure 6.50 highlights one of the two different transformations which create the Dragon fractal.  As you can see, when the whole of the fractal is "fed" into the transformation, the transformation scales, translates, and rotates the fractal into the upper half of itself.  Can you see the sort of transformation that would produce the lower half of the fractal?
 

Figure 6.51  The Sierpinski Triangle and its three transformations.
 

Figure 6.51 maps the three transformations that create the Sierpinski Triangle.   Notice that the only elementary operations involved are scaling and translation.  The combination of all of these transformations is what gives the fractal its shape.  This is what the 'S' in IFS means -- it is a system of transformations.
 
Alice Transformed by the Looking Glass
 
            "'She's all right again now,' said the Red Queen. 
'Do you know Languages?  What's the French for fiddle-dee-dee?'
            'Fiddle-dee-dee's not English,' Alice replied gravely.
            'Whoever said it was?' said the Red Queen.
                                                -- Lewis Carroll
 
Self-similarity of an image is one of the things that make us point at something and call it a fractal.  Self-similarity can be obvious, as it is in Sierpinski's Triangle,  or it can be subtle and easy to miss, as in the traditional Mandelbrot Set, where one has to peer into the fibers of the set microscopically to see distorted versions of the set itself.  What circumstances will likely generate an obviously or an unobviously self-similar fractal?  Although by no means mathematically rigorous, as a general rule the simplicity or complexity of the math determines the simplicity or complexity of the self-similarity. 
 
We can determine how similar two pictures are by comparing associated points and their neighborhoods.  You'll recall that a point has an x and a y coordinate, written (x, y), and that the order in which they appear is important;  the point (x, y) is not the same as the point (y, x).  Think of a picture as being a collection of points, or a set of points mapped out on a grid.  We can see how two pictures are similar by comparing the individual points contained within each.
 
A transformation is a rule, or set of rules, which takes the picture and "maps" it to another picture, called its image.   Let's say that for each point in the picture there is one and only one corresponding point in the image.  If that's true, the transformation is called one-to-one.  The inverse of the transformation works in the opposite direction -- the inverse maps a given point in the image to the corresponding point or points in the picture itself.  If there is one and only one matching point in the picture, the transformation is called onto.
 
Let's assume that our transformation, call it T, is both one-to-one and onto.  We pick two random points in the original picture, p1 and p2, and find the corresponding points in the image, q1 = T(p1) and q2 = T(p2).  To see, mathematically, if the picture and its transformed image are similar, we need to verify that the neighborhood of points around p1 is like the neighborhood around q1, and that all paths between p1 and p2 are like the paths between q1 and q2.  With a black and white picture we can easily see which points are part of the picture's set of points: if a point is black, that point is contained in the picture or image; if a point is white, that point is outside the picture or image.   Next we can use the following tests to determine the similarities between the neighborhoods:
 

Figure 6.52  Two sets of points.  We can compare points within each set to see if the sets are related.
 
    Look at Figure 6.52.  All of the points around p1 which you can reach without going outside the picture's set are p1's island.  If you put the point of your pencil down on p1, any point that you can get to without picking up the pencil and without going into the white space is on p1's island.  Can you see that all the points in p1's island match all the points in q1's island, and that there are no points in q1's island that don't correspond to points in p1's island?
 
    Using a ruler, find the point on p1's island that is the furthest from p1.  Is the point on q1's island which corresponds to that point the furthest point away from q1?
 
    If there is a path between p1 and p2 without leaving the set, is there also a path between q1 and q2?  Is there more than one path?  Is the number of paths between the pairs of points the same?
 
When we talk about "linear" transformations what we mean is that there is a simple proportional relationship between the points of the picture and the resultant points in the image.  Linear transformations are the simplest transformations that preserve similarity (actually they preserve something slightly more general called affinity.  Strictly speaking, a transformation preserves similarity only if angles are preserved, but here we'll use the term similarity instead of affinity since it's a more familiar word).  Let's call the plane that the picture resides on the source plane, and the plane the resultant image resides on the destination plane.  An example of a linear transformation would be a photocopier set to "reduce".  The copy is a smaller version of the original.  This type of transformation is called a "scaling" transformation, mentioned earlier.
Let's look at the mathematics of linear transformations in more detail.  If a point p1 on the source plane lies a distance (Δx, Δy) from another point p2, such that p2 = p1 + (Δx, Δy), the transformed points on the destination plane, q1 and q2, will be aΔx apart in the x direction and eΔy apart in the y direction, where a and e are constants.  In mathematical terms, q2 = q1 + (aΔx, eΔy).
The elementary linear transformations, scalings, rotations, translations, and skews, can be combined together to produce a pair of equations which look like this:

            x' = ax + by + c,
and     y' = dx + ey + f, where a, b, c, d, e and f are all constants.

Individually, a scaling transformation looks like this:

            x' = ax,
and     y' = ey,

a rotation like this:

            x' = x cos θ - y sin θ,
and     y' = x sin θ + y cos θ, where θ is the angle of rotation (about the origin),

a translation like this:

            x' = x + c,
and     y' = y + f,

and a skew like this (this example skews in the y direction):

            x' = x,
and     y' = dx + y.

The resultant points of the image are related to the original points of the picture in simple ways.  If the absolute values of the constants a, b, d and e are all between 0 and 1, then this transformation is called contractive.  This means that no matter what else the transformation does to a set of points (a picture in our case) on the source plane, it must scrunch the set up so that the resultant set (the image) on the destination plane is smaller than the original.
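In code the whole machinery is one line per point. A minimal Python sketch of ours (the coefficient names follow the pair of equations above):

    def transform(point, a, b, c, d, e, f):
        """Apply x' = ax + by + c, y' = dx + ey + f to a point (x, y)."""
        x, y = point
        return (a * x + b * y + c, d * x + e * y + f)

    # A 50% scaling plus a translation -- contractive, since |a|, |b|,
    # |d| and |e| are all less than 1:
    print(transform((2.0, 4.0), 0.5, 0.0, 1.0, 0.0, 0.5, 0.0))   # (2.0, 2.0)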
 
Now we'll do a brief summation:  an IFS is a group of contractive linear transformations that generates a fractal.  It might be an uninteresting fractal, like a line segment, or it might be interesting to look at, like some of the Dragon fractals shown earlier.  The fractal generated by an IFS is the set of points which, when passed through the transformations, remains unchanged.  To give a simple example of sets that pass through a transformation unchanged, I'll use a non-linear function, f(x) = x^2, as an example transformation.  If I define the set {0, 1}, the transformed set is {0^2, 1^2} = {0, 1}.   The set transformed is the same set.  In contrast, the set {1, 2}, when transformed, {1, 4}, is not the same as the original set.  Because this function isn't linear and it isn't, in general, contractive, we're not going to look at it further in relationship to fractals, but it's interesting to note that there is a set for this function which remains unchanged after being transformed (are there any other sets like that for this function?)
 
Likewise, there is usually an (x, y) pair that passes through each individual linear transformation of an IFS unchanged.  This point is called a fixed point.  If there isn't a point which remains fixed through a transformation, it's unlikely that the transformation is contractive.
 
 
For a given transformation where $a$, $b$, $c$, $d$, $e$, and $f$ are all defined, the fixed point is given by:

      $x = \dfrac{c(1 - e) + bf}{(1 - a)(1 - e) - bd}$,

      $y = \dfrac{f(1 - a) + cd}{(1 - a)(1 - e) - bd}$.
 
Under what conditions do the fixed-point equations give us "problematic" answers?  When the denominator is zero, of course, and under most circumstances this won't happen, because both terms of the denominator would have to be zero, or they would have to sum to zero.  So there are three cases:       
                        1) $a = 1$ and $bd = 0$;
                        2) $e = 1$ and $bd = 0$;
                        3) $(1 - a)(1 - e) = bd$.
 
Cases 1 and 2 won't ever happen because we don't allow $a$ or $e$ to be 1.  The only possibility occurs with case 3.  It turns out that there usually isn't a fixed point at all when that happens, unless it's also true that $f = -c$.  When $f$ does equal $-c$, you have more than one fixed point -- all the points on an entire line remain fixed.  This is easiest to see with a particular transformation
 
      $x' = 0.5x + 0.5y$;
      $y' = 0.5x + 0.5y$.  (Here $c = 0$ and $f = -0$; $-0 = +0$ for all you particularly cranky people.) 
 
Any point where $x = y$ will work here (try it!).  This is generally not a problem, though, and a transformation whose "fixed point" is actually a line is as valid a transformation as one whose fixed point really is a point. 
Figure 6.53 How to determine the Fixed Point of a Linear Transform.
 
In the vast majority of cases a linear, contractive transformation has a single fixed point.  It's also usually true that if you have two transformations, $T_1$ and $T_2$, you will have a pair of points $P_1$ and $P_2$ such that $T_1(P_1) = P_2$ and $T_2(P_2) = P_1$.  This is interesting because $T_2(T_1(P_1)) = P_1$ and $T_1(T_2(P_2)) = P_2$.  Add another transformation and you get triplets of points that "oscillate" (alternate) among themselves.  These fixed and oscillating points turn out to be points within the fractal of the IFS, and, because of their mathematical properties, they are among the most important points.  Later we will see that these fixed and oscillating points have "addresses" which are also interesting.  For now you can think of these points as the "tent stakes" of the fractal, because they are the points which "pull" the fractal away from being a single point, and they are also the points which are most responsible for giving the fractal its shape.  Figure 6.54 shows the fixed points of one possible set of transformations of Sierpinski's Triangle.  Not all fractals' fixed points are so obviously important, but the picture does help to illustrate this concept. 
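A quick numerical check of these claims, as a small Python sketch (our own; the two maps are two of the three Sierpinski transformations that appear later in this section):

    def transform(p, coeffs):
        a, b, c, d, e, f = coeffs
        return (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)

    def fixed_point(coeffs):
        # Solve x = ax + by + c, y = dx + ey + f using the formulas above.
        a, b, c, d, e, f = coeffs
        denom = (1 - a) * (1 - e) - b * d
        return ((c * (1 - e) + b * f) / denom,
                (f * (1 - a) + c * d) / denom)

    T1 = (0.5, 0.0, 0.0, 0.0, 0.5, 0.5)   # shrinks everything toward (0, 1)
    T2 = (0.5, 0.0, 0.0, 0.0, 0.5, 0.0)   # shrinks everything toward (0, 0)

    print(fixed_point(T1))                # (0.0, 1.0)

    # Alternating T1 and T2 from any starting point converges to a pair of
    # oscillating points, (0, 1/3) and (0, 2/3), which swap with each other.
    p = (0.37, 0.88)
    for _ in range(60):
        p = transform(transform(p, T1), T2)
    print(p)                              # approximately (0.0, 0.3333)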
 

Figure 6.54 Possible "fixed points" of the Sierpinski Triangle.
 

We've shown that, given a particular transformation, there is often associated with it a set of points (usually a single point) which remains unchanged when passed through the transformation.  For given sets of transformations, there are certain sets of points which seem to remain constant when transformed, if considered as a whole.  To illustrate this, we'll borrow an analogy from Heinz-Otto Peitgen [(1945- ), German mathematician well known for his spectacular images of the Mandelbrot Set] and change it a little.  Let's say you have a photocopier set to reduce by 50% and a piece of paper with a big black triangle on it [Figure 6.55].  The first thing you do is make three copies of the piece of paper, cut out the resultant black triangles, and arrange them on a new sheet of paper so that you have a triangle the same size as the original but with a hole in the middle [Figure 6.56].  Then you paste them down in those locations.  What you have done mathematically to the original triangle is a scaling and three different translations.  Then you take the resultant image and do the whole process over, so that the second time around you get a picture similar to Figure 6.57.  As you continue to repeat these steps, you will find that the image gets closer and closer to a Sierpinski Triangle.
 

Figure 6.55

Figure 6.56

Figure 6.57  Using a photocopier to produce a Sierpinski Triangle.
 

What would happen if you had started out with that Sierpinski Triangle, instead of a big black triangle?  After running through this process once -- or any number of times for that matter -- you would find that the resultant image is exactly the same as the original.  The fractal is the image which remains unchanged when run through a linear, contractive IFS.  For any group of contractive transformations on the plane, there is one and only one associated fractal set.
 
If all of the transformations of an IFS are contractive, then any point on the plane, if passed through any of the transforms of the IFS repeatedly, will eventually approach (be arbitrarily close to) the fractal.  This fact has led people to associate fractals with a concept from Dynamics called "Attractors".
 

Figure 6.58 - "Attractors -- using a ball bearing in a bowl as an example."
 

Attractors were originally associated with physical systems, such as pendulums or ball bearings rolling inside a bowl.  A ball bearing rolling in a half-spherical bowl is an example [Figure 6.58].  Think about the height ($z$ coordinate) and the velocity of the ball.  Regardless of where the ball was initially, and in what direction and at what speed it was pushed, it will end up at the bottom of the bowl at height $Z_0$ and velocity 0.  This point $(Z_0, 0)$ is called an attractor of the system because every possible path of the ball ends up here eventually. 
 

Figure 6.59 Ball bearing in a bowl with bent bottom - a system with two Attractors.
 

It's possible to have more than one attractor.  Imagine that the bowl gets bent so that there are two different low spots in the bottom, and the ball can settle down in either one of them.  This bowl then has two attractors [Figure 6.59].  In systems that are frictionless, or that include feedback, the attractor might not be a point at all.  It might be a state of oscillation at a stable frequency, such as a frictionless pendulum swinging in a vacuum.  Sometimes, when the equations which describe the physical system are non-linear, it's possible that the attractor is a strange multidimensional non-repeating, non-intersecting path called a Strange Attractor (see Chapter 2 and Chapter 7). 
 
The set of points that passes through an IFS unchanged is commonly called the "attractor" of the IFS.  Points not in the attractor fall towards it, and points within it remain within it. 
 
How to Go From Here to There Using Iterated Function Systems
 
How do we go about discovering the constants required to produce a particular transformation?  There are six constants we need values for.  To find them, we need to know how three points on the source plane map to three points on the destination plane.  See Figure 6.60.  Given three source and three destination points, you solve for the unknowns $a$, $b$, $c$, $d$, $e$, and $f$.  We'll illustrate with one of the three transformations of the Sierpinski Triangle of Figure 6.61, as shown in Figure 6.62. 
 

Figure 6.60 Mapping a "Source" triangle onto a "Destination" triangle.

Figure 6.61 Sierpinski's Triangle with coordinates.

Figure 6.62 Source and Destination triangles of one of the Sierpinski Triangle transformations.
 

Given the coordinate system as shown, the three source points are located at
      $(0, 0)$, $(1, 0)$, and $(0, 1)$,
and the destination points are at
      $(0, 0.5)$, $(0.5, 0.5)$, and $(0, 1)$;
 
so that the six equations generated are

      $0 = a \cdot 0 + b \cdot 0 + c$,        $0.5 = d \cdot 0 + e \cdot 0 + f$,
      $0.5 = a \cdot 1 + b \cdot 0 + c$,      $0.5 = d \cdot 1 + e \cdot 0 + f$,
      $0 = a \cdot 0 + b \cdot 1 + c$,        $1 = d \cdot 0 + e \cdot 1 + f$.
Solving for the constants we get the two equations
 
            $x' = 0.5x$,
and     $y' = 0.5y + 0.5$.
 
In this transformation, $b$, $c$, and $d$ are all 0 because the transformation is a simple scaling and translation.  The transformation for Figure 6.63 is a little more complex.  Imagine that the destination triangle of the last example was rotated 90° counterclockwise about the point $(0, 0.5)$.  Figure 6.64 shows what happens to the resultant fractal, assuming the other two transformations remain the same.  Below are the transformation equations:
 
            $x' = -0.5y$,
and     $y' = 0.5x + 0.5$.
 

Figure 6.63 A transformation similar to the previous example but which includes a rotation.

Figure 6.64 A fractal similar to the Sierpinski Triangle which uses a transformation with rotation.
 

Let's look at the previous equations a little.  In the first example, $x'$ and $y'$ depend only on $x$ and $y$, respectively.  In the second example, $x'$ and $y'$ depend only on $y$ and $x$, respectively.  The second transformation includes a rotation of ninety degrees, so a movement to the right on the source plane causes a movement upward on the destination plane.
 
In the Sierpinski Triangle, there are three transformations (refer to Figure 6.51).  Transformation 1 maps the entire image into the uppermost triangle; Transformation 2 maps the entire image into the lower left triangle; and Transformation 3 maps the entire image into the lower right-hand triangle.  An IFS is a set of rules dictating how one can get from one point to another, and also which points are possible to reach.  In the case of the Sierpinski Triangle, if we're at any point on the fractal and we want to get to the uppermost vertex, we simply use Transformation 1 over and over, each time covering half the remaining distance to that vertex.  Note that we'll never actually get there unless we were there to begin with, but we'll get infinitely close as the number of times we use Transformation 1 approaches infinity.  Since the "road" to the top vertex is to use Transformation 1 repeatedly, we can call its address "1111111…".  The bottom left vertex is similarly "2222…".  The other points are various combinations of the three transformations.
 
In our photocopier analogy, the first time we made our copies and pasted the reduced versions together, we were establishing the first number in the address:  points in the upper triangle began with a "1",  points in the lower left triangle began with address "2", and points in the lower right triangle began with address "3".  When we repeated our photocopier process for the second time, we established the next digit of the address.  Since at this point there are nine combinations possible, we'll focus just on the top triangle.  For a point to be in the uppermost little triangle of the uppermost triangle, the first transformation was taken twice.  For a point to be in the lower left little triangle of the upper triangle, Transformation 2 was taken first and then Transformation 1 was taken.  Note that this means that the leftmost number of the address is the number of the last transformation taken, not the first.  This is counterintuitive to Westerners, who read from left to right.  If this process is done over and over ad infinitum you can see that every address (series of transformations) will be assigned a point (or vice versa).  This form of calculating the fractal of an IFS is generally called the "Deterministic Algorithm". 
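The Deterministic Algorithm is easy to express in code.  Here is a minimal Python sketch (our own construction; the three maps assume the triangle with corners (0, 0), (1, 0), and (0, 1) used in the earlier example):

    # The three Sierpinski maps, stored as (a, b, c, d, e, f).
    MAPS = [
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.5),   # Transformation 1: uppermost triangle
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.0),   # Transformation 2: lower left triangle
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.0),   # Transformation 3: lower right triangle
    ]

    def transform(p, m):
        a, b, c, d, e, f = m
        return (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)

    # Start with any "picture" (a set of points) and replace it, at each level,
    # with the union of its images under all three transformations.
    picture = [(0.2, 0.1), (0.8, 0.1), (0.4, 0.5)]
    for level in range(8):
        picture = [transform(p, m) for m in MAPS for p in picture]
    # Every point of 'picture' now lies within about (0.5 ** 8) of the fractal.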
 

Figure 6.65 Using an "Apple" to generate Sierpinski's Triangle.  The more you have, the smaller they become
 

In Figure 6.65 we can see the results of the deterministic algorithm applied to a picture of an "Apple" icon.  The transformations are those for a Sierpinski Triangle.  At each new iteration of the system the image becomes closer and closer to that of the Sierpinski Triangle, and the initial picture becomes less and less important as its images shrink.  We'll run the picture of a dragon fractal through these same transformations and see what happens [Figure 6.66].  Once again, the Sierpinski Triangle prevails, and the initial picture dwindles away. 
 

Figure 6.66 If we run the wrong fractal through the transformation repeatedly, we still get the right fractal in the end.
 

Consider Figure 6.65b.  The three images of the apple correspond to the first number of an address.  Figure 6.65c shows nine apples, corresponding to the first two numbers of the address; Figure 6.65d, with 27 apples, shows the first three numbers, and so on.  Any infinite series of the numbers 1, 2, and 3 is the address of a point in the Sierpinski Triangle, and this address also describes what transformations need to be taken, and in what order.  The highlighted apple of Figure 6.67 corresponds to addresses beginning with "123…"  Do you see why? 
 

Figure 6.67 The "Apple" which corresponds to address "123".
 

Previously we discussed fixed and oscillating points of linear transformations.  Now we'll reexamine them in light of their addresses.  The address of the fixed point of one of the transformations of a fractal, Transformation N, will be "NNNNNN…".  For oscillating points -- those points which a repeating cycle of two or more transformations carries back to themselves -- the addresses will look like "NMNMNMNM…" or "MNPMNPMNP…".  When we want to know what the oscillating points of two transformations are, we can pick any point, apply the two transformations alternately, and eventually the sequence will converge to those two points. 
 
The Deterministic Algorithm is useful to help illustrate what a linear fractal is, but in practice linear fractals are rarely drawn that way, because it takes a long time to generate a good image.  There is another, much faster way to generate the fractal of an IFS, called the "Random Iteration Algorithm" or "Chaos Game".  This method depends on two fundamental principles of IFS:  1) if any one of the transformations of an IFS is used on a point of its associated fractal, the resultant transformed point is also in the fractal; and 2) because all of the transformations are contractive, even if the point is not in the fractal, the resultant point will be closer to the fractal than the original point.  Hence, if we choose a point -- any point, regardless of whether it's in the fractal or not -- and then randomly pick a transformation, get the resultant point, randomly pick another transformation, get the resultant point, and so on, eventually we will get arbitrarily close to the fractal.  Not only that, but by plotting the resultant points along the way we will generate a close approximation of the fractal.  We say a "close" approximation because points are very (infinitely) small and our eyes and pencils and computer screen pixels are very much bigger, and if the first point we chose wasn't exactly on the fractal then no subsequent points will be either (but the longer you play the Chaos Game, the closer you get). 
 
Let's illustrate the Chaos Game with an experiment:
 
Step 1:  Draw three dots on a piece of paper in a triangle, widely spread apart, and label them P1, P2, and P3.  Choose P1 as your "current" point. 
 
Step 2:  Roll one die (half of a pair of dice).  Divide the number of pips showing by two and round up to the nearest integer. 
 
Step 3:  Move halfway between your current point and the point associated with the number you just generated.  Put a dot at that spot.  (If that number is 1, then put a dot on P1, because halfway between P1 and P1 is P1.  The upper left section of Figure 6.67 shows the result if the number you generated was 2.) 
 
Step 4:  Go back to Step 2, using the dot you just drew as your new current point. 
Repeat steps 2 and 3 as long as you want.
 
Continue throwing the die, dividing by two and rounding up, and moving halfway from your current dot to that number's point.  The upper right picture of Figure 6.67 shows one possible result after 5 rolls, the lower left shows the result after 100 rolls, and the lower right shows the result after 5000 rolls.  These pictures, of course, were generated by a Mac, using the Random() function in place of the die.  Notice that even though the choices as to where to place the dots were made randomly, the end result was the orderly Sierpinski Triangle.
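In code, the Chaos Game is only a few lines.  A minimal Python sketch (our own, using the same three Sierpinski maps as before, with random.choice() standing in for the die):

    import random

    MAPS = [
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.5),
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.0),
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.0),
    ]

    p = (0.0, 0.0)
    dots = []
    for i in range(5000):
        a, b, c, d, e, f = random.choice(MAPS)      # the roll of the die
        p = (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
        if i > 10:                                  # let the point settle in first
            dots.append(p)
    # Plot 'dots' and the Sierpinski Triangle appears.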
 
There is another way to generate linear fractal images, called the "Escape Time Algorithm".  The basic idea is that instead of using the transforms of the IFS, you utilize their inverses.  By using the inverse of a transform, you can find out what point of the source plane generated a particular point in the destination plane.  If the destination point was in the fractal, the source point will be also.  If not, then the farther the destination point was from the fractal, the farther the source point will be from the fractal.  If we watch how fast points in the destination plane "fly away" from the fractal, we can separate the points on or near the fractal (which fly away slowly) from points far away (which "escape" quickly). 
Color Plate 63 was generated by this method.
 
Figure 6.68 The "Escape Time" algorithm explained.
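As a rough illustration only, here is our own Python sketch of the escape-time idea for the three Sierpinski maps.  Each forward map halves the distance to one corner, so its inverse doubles it; the rule for deciding which inverse to apply (use the map whose corner is nearest) is a simplifying assumption that happens to work for these maps:

    CORNERS = [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)]   # fixed corners of the maps

    def escape_time(p, max_iter=32, radius=100.0):
        for n in range(max_iter):
            if p[0] ** 2 + p[1] ** 2 > radius ** 2:
                return n      # escaped quickly: the point was far from the fractal
            # Apply the inverse w^-1(p) = 2p - v of the map whose corner v is nearest.
            v = min(CORNERS, key=lambda c: (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2)
            p = (2 * p[0] - v[0], 2 * p[1] - v[1])
        return max_iter       # never escaped: on or very near the fractal

    # Evaluating escape_time over a grid of pixels and coloring each pixel by
    # the count produces pictures in the style described above.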
 
The Collage Theorem, Self-Replication
 
This Sierpinski Triangle is all well and good, but you might be asking why we bother when the Mandelbrot Set is so much prettier and doesn't make you think so hard.  The answer is that IFS-generated fractals are the most easily designed -- it's not that hard to know what you want to make beforehand and generate a fractal that matches that idea.  The reason is explained by Michael Barnsley's "Collage Theorem", which says that if you want to generate a particular picture, all you have to do is come up with a group of transformations such that, when you take the transformed images of the picture and combine them (in other words, take the mathematical union of the images), you end up with the picture (or something very similar).  If the union of the images is close but not the same as the picture you want, then the resulting image will be close but not quite the same.  The idea of "close" is related to the Hausdorff Distance between two sets, which is described more fully later.  For now, to illustrate this idea of better and worse "collages", we will examine another unglamorous fractal (a triangle, and this time it's so dull it doesn't even have any holes in it), and then expand upon the idea to discover that right next door to this dull fractal are a number of very interesting ones. 
 
Let's say that we like to listen to elevator music all day, and also that we have the burning desire to create a fractal right triangle -- a triangle that has one 90-degree corner.  From geometry you might remember that you can split a right triangle so that the pieces are smaller copies of the original: 
 

Figure 6.69 Splitting a right triangle into two similar right triangles.
 

The transformations that turn the big triangle into the little triangles are linear transformations, since they involve only scaling, rotation, and translation.  If we calculate these transformations exactly, we will end up with a fractal triangle.
 

Figure 6.70 If the two destination triangles don't quite cover the source triangle we get an interesting fractal.
 

What happens if we're a little sloppy and the transformations that we calculate don't quite meet in the middle?  See Figure 6.70.   This looks much more interesting, but it illustrates that if you want to model something with fractals, you need to make sure that the little copies of the original image produced by the transformations cover the image as closely as possible.  Otherwise the fractal you were trying to make might come out much different than you were expecting. 
 
Included are some other interesting deviations from the triangle fractal.  Can you visualize the transformations that would create these fractals?

 


Figures 6.71 - 6.73 More variations on a triangle fractal of two transformations.
 


Figure 6.74 A fractal maze.
 

Triangles aren't the only simple shapes that are self-similar.  Squares are, too.  If you divide a square into 4 equal pieces, each is a smaller square.  If you carefully choose where to leave empty space, and how the entrance to the outside world is oriented for each of the four transformations, you can create a fractal maze like Figure 6.74.  This is an interesting fractal in that the passages get smaller and smaller as one progresses inward.   
 

Figure 6.75 Barnsley's fractal fern.
 


Figure 6.76 A railroad track fractal.
 

If you become good at seeing how to create something by combining little distorted copies of itself, you'll be able to design fractals yourself.  The famous fractal fern created by Michael Barnsley is shown in Figure 6.75.  This is one of the more startling examples of IFS fractals, as it closely resembles nature.  Let's look at this fractal and see what the underlying linear transformations are.  Look at the "Railroad Tracks" fractal of Figure 6.76 and notice that the transformation derived from the two triangles next to it (there are two triangles there -- if that's not obvious, think about it!) causes the railroad ties to disappear into the distance.  This principle is what causes the fern, as it gets closer to the top, to diminish in size and have more and more branches.  The exact source and destination triangles are shown in Figure 6.77.  There are three other transformations, the destination triangles of which are shown in Figure 6.78.  One transformation makes the stem, one makes a branch to the right, and the other makes a branch on the left. 
 

Figure 6.77 Source and Destination triangles of the "railroad track" transformation of the fern.

Figure 6.78 The other Destination triangles of the fern.
 

The stem is of particular interest.  It is created by taking the entire image of the fractal and smashing it so that it's a straight line.  This is a perfectly legitimate linear transformation.  It's a transformation that completely ignores where the current point is located horizontally and utilizes only its vertical position.  In this way you can create thin lines and curves, which at times can be very useful.  It's possible to create a spiral using the principle of the railroad tracks together with the principle of the stem, as in Figure 6.79. 
 

Figure 6.79 A spiral fractal.
 

Weighted Averaging for Iterated Function Systems
 
Particular transforms of an IFS may be "more important" than others, and now we'll look at how that changes the images produced by the Chaos Game.  Before we do that, we have to look at what we mean by the contractivity of a transform, by looking at distances between points.  The old Pythagorean formula for distance,

      $D = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$  for two points $(x_1, y_1)$ and $(x_2, y_2)$,

is a valid way of measuring distance, as are

      $D = |x_1 - x_2| + |y_1 - y_2|$, or
      $D = \max(|x_1 - x_2|,\ |y_1 - y_2|)$.
 
In the mathematical field of topology, the idea of distance and the way it's measured, called a metric, must follow four rules:

1) A distance can never be negative;
2) A distance can only be zero if the things we're measuring the distance between are the same;
3) The distance gotten by measuring from one thing to the other thing must be the same as the distance measured from the second thing to the first;
4) If you've got three things, $x$, $y$, and $z$, and distances $D(x, y)$, $D(y, z)$, and $D(x, z)$ between them, then $D(x, z) \le D(x, y) + D(y, z)$.  This last rule is called the triangle inequality. 
 
It may be of interest to note that there is a particularly plain metric, called the Trivial Metric, which obeys all of the above rules.  The rule for the trivial metric is
 
            D = 1 if the two things are different, and
            D = 0 if the two things are the same. 
 
With this notion of distance in hand, let's apply it to contractive transformations to get a quantitative measure of how contractive a particular transformation is.  Let's call our transformation $T$ and define two points $P_1$ and $P_2$.  If there is a number $c$, between 0 and 1, such that

      $D(T(P_1), T(P_2)) \le c \cdot D(P_1, P_2)$  for all points $P_1$ and $P_2$,
 
then the transformation is contractive.  The smallest value of $c$ for which the above statement is always true is called the "contractivity factor" of the transformation $T$, and this contractivity factor is exactly what we want to use to quantify the contractivity of transformations.  It is also roughly equivalent to the Reducing Factor of our photocopier (expressed as a percentage in that case). 
 
What happens if there is no such $c$ less than 1?  This means that the transformation is not contractive.  Using a transformation that isn't contractive in your IFS is somewhat like setting your photocopier to "Enlarge" and repeatedly blowing up the image.  The resultant image never settles down into a fractal; on the contrary, things keep expanding, and if you continue the process forever you have an infinitely large image. 
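Under the Pythagorean metric, the contractivity factor of a linear transformation depends only on the constants $a$, $b$, $d$, and $e$ (the translations $c$ and $f$ never change distances): it is the largest factor by which the linear part can stretch the distance between two points, which for a 2-by-2 matrix has a closed form.  A small Python sketch of this computation (our own):

    import math

    def contractivity(coeffs):
        # Largest singular value of the matrix [[a, b], [d, e]]: the worst-case
        # factor by which the map stretches the distance between two points.
        a, b, c, d, e, f = coeffs
        s = a * a + b * b + d * d + e * e
        det = a * e - b * d
        disc = math.sqrt(max(s * s - 4.0 * det * det, 0.0))
        return math.sqrt((s + disc) / 2.0)

    print(contractivity((0.5, 0.0, 0.0, 0.0, 0.5, 0.5)))   # 0.5: contractive
    print(contractivity((2.0, 0.0, 0.0, 0.0, 2.0, 0.0)))   # 2.0: "Enlarge"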
 
The contractivity factor of a transformation is particularly important when you're using the Chaos Game to generate fractals.  The transformations which have the smallest contractivity factors (in other words, those that reduce the image the most) get the job of drawing their part of the image done the fastest, because the area that they are drawing into is the smallest.  It's often advantageous, when working with an IFS whose transformations have widely varying contractivity factors, to modify the process of randomly picking transformations so that the transformations with the largest contractivity factors get picked more often than those with the smallest.  This helps balance the plotting process so that the fractal gets drawn evenly. 
 
You can also modify the random process to favor certain transformations in order to emphasize parts of the fractal.  This is called weighted averaging.  To illustrate this, let's look at the Square Fractal (four transformations), first without any preference (or weighting) for which transformation to take at each step [Figure 6.80].

 

Figure 6.80 A "square" fractal.
 
If we choose the diagonal transformations twice as often, we get Figure 6.81.
 

Figure 6.81 The square fractal, emphasizing the diagonal transformations.
 

If we choose the upper left transformation 4 times as often as the lower right, and the other two transformations twice as often as the lower right, we get Figure 6.82:
 

Figure 6.82 The square fractal, emphasizing the left and top transformations.
 

Now let's apply this idea to a more interesting fractal, a "dragon" fractal, first without any preference as to which of the two transformations is chosen [Figure 6.83]:
 

Figure 6.83 A dragon fractal without weights.
 

And then choosing the "lower" transformation twice as often [Figure 6.84]:
 

Figure 6.84 Emphasizing the lower transformation of the dragon.
 

Done cleverly, in the right situations, weighting can simulate shading, as if there were a light source near the fractal.  On a computer that supports gray-scale or color, it can also bring out details that would remain unaccentuated if weights were not used. 
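Weighting is a one-line change to the Chaos Game:  instead of choosing each transformation with equal probability, choose it in proportion to a weight.  A minimal Python sketch, in the spirit of Figure 6.81 (the particular maps and weights are our own illustration):

    import random

    # Four quarter-square maps; the first and last are the diagonal pair.
    SQUARE_MAPS = [
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.0),   # lower left
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.0),   # lower right
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.5),   # upper left
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.5),   # upper right
    ]
    WEIGHTS = [2, 1, 1, 2]                # favor the diagonal maps two to one

    p = (0.0, 0.0)
    dots = []
    for i in range(20000):
        a, b, c, d, e, f = random.choices(SQUARE_MAPS, weights=WEIGHTS)[0]
        p = (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
        if i > 10:
            dots.append(p)
    # The two diagonal quadrants now receive twice as many dots as the others.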
 
More about distance and collages
 
We've generalized the idea of distance and introduced the idea of metrics, so now we'll introduce the idea of the distance between a point and a set, as this will help in understanding the Collage Theorem.  The distance between a point and a set is defined to be the smallest distance between the point and any point in the set [Figure 6.85].  This is equivalent to putting one leg of a compass on the point in question and increasing the spread of the compass until the other leg touches a point in the set.  The distance between the two compass points is then the distance between the point and the set. 
 

Figure 6.85 The distance between a point and a set.
 


Figure 6.86 The distance between two sets.
 

The distance between two sets requires a little more thought.  The two sets might have a great many points in common and differ only at a few points.  The Hausdorff distance is a clever way of thinking about this, such that a quantitative distance can be found that conforms to all of the rules of metrics stated earlier.  Call the two sets Set A and Set B.  Find the point in Set A that's farthest from Set B, and find the distance between that point and Set B.  Call that distance DA.  Then find the point in Set B that's farthest from Set A, and find the distance between that point and Set A.  Call that distance DB.  Whichever of DA and DB is larger is the Hausdorff distance [Figure 6.86]. 
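For finite sets of points, this recipe translates directly into code.  A small Python sketch (our own):

    import math

    def point_to_set(p, S):
        # Distance from a point to a set: the smallest point-to-point distance.
        return min(math.dist(p, q) for q in S)

    def hausdorff(A, B):
        dA = max(point_to_set(p, B) for p in A)   # worst point of A against B
        dB = max(point_to_set(q, A) for q in B)   # worst point of B against A
        return max(dA, dB)

    A = [(0, 0), (1, 0), (0, 1)]
    B = [(0, 0), (1, 0), (0, 1), (0.5, 0.5)]
    print(hausdorff(A, B))    # about 0.707: how far B's extra point is from A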
 
To paraphrase the Collage Theorem:  the smaller the Hausdorff distance between the picture you're trying to create and the union of all the images of that picture generated by the transformations of the IFS, the more the picture and the attractor of the IFS will be alike.  It's an interesting dilemma, trying to visualize something in terms of small versions of itself, but it is a skill that can be learned, and, because fractals seem to have nearly universal appeal, it's a skill that is highly applicable to commercial graphics. 
 
A Practical Example If I've Ever Seen One.
 
"There is an old saw that says, 'Computers can only do what you tell them to do.'  This is right in one sense, but it misses the point:  you don't know in advance the consequences of what you tell a computer to do; therefore its behavior can be as baffling and surprising and unpredictable to you as that of a person."
                                                            -- Douglas R. Hofstadter
 
Let's demonstrate how one might approach the design of a fractal in a situation that might actually arise.  Suppose you wanted a fractal filigree for a corner, so that you could make a border around a piece of paper, such as a certificate of achievement.  Further, assume that the basic shape you want to cover looks like that of Figure 6.87. 
 

Figure 6.87 A corner shape we want to model using IFS fractals
 

The next step is to imagine how little copies of that shape might be arranged to fill out the shape.  Two possible solutions are shown in Figures 6.88 and 6.89. 

 


Figures 6.88 and 6.89 Two ways to cover the corner shape with self-similar versions of the shape.
 
Note that the fit isn't exact -- if it were, we'd have as our fractal the original shape of Figure 6.87, and that wouldn't be very interesting.  It's the mismatch between the pieces and the whole that creates the "holes" of the fractal.  Next, either on graph paper or on a computer using appropriate fractal software, determine the constants of the individual transformations.  Figure 6.90 shows one possible user interface for doing this, and shows the triangles involved for one of the transformations (lower left). 
 

Figure 6.90 A possible Macintosh interface for designing IFS fractals.
 

The complete setup, after all of the triangles have been entered, looks like Figure 6.91. 
 

Figure 6.91 The completed covering of the corner shape.
 

The final step, drawing the fractal itself, has to be done on a computer.  The results of the two different "collages" are shown in Figure 6.92.
 

Figure 6.92 The resultant fractals of the two coverings.
 

The Ultimate Image Compression Scheme, Will It Work?
 
Would it be possible to apply the concepts of IFS fractals to image compression?  We have seen that with a few simple equations (the IFS), an image of great complexity can be generated.  Michael Barnsley of the Georgia Institute of Technology saw the possibility of using IFS fractals to produce practical fractal image compression.  More traditional methods of compression compare the pixels of an image with nearby pixels.  Look at any picture and you will usually see that there are large areas which have the same color, or which change color gradually.  It is relatively easy to design compression algorithms which take advantage of this.  Barnsley's approach, however, takes advantage of self-similarity in a picture.  If the picture has obvious self-similarity, then Barnsley's approach can compress the image to an extreme degree.  Even if it doesn't, there are nearly always unobvious self-similarities -- the trick is to find them.  It's not easy to program a computer to see self-similarities without human guidance; humans are very adept at pattern recognition, but the means by which we pick out similarities between objects is not well understood and is difficult to model on digital computers.  Neural networks may be able to do this in a more natural way.  Only the future can say how prevalent fractal compression schemes will become.  
 
 
Sound and Fractals
 
The generation of sound via fractal methods is another interesting avenue with much potential.  Fractals are, of course, generally associated with pictures.  Mathematics tends to be a highly visual science, and there are certain advantages to the sense of sight, because ideas can be expressed in visual forms that persist over time.  Sound, however, suffers from two distinct disadvantages:  there's only one air pressure happening at any particular time at a particular point, and there's no temporal longevity (when no one's playing the note, the note's gone, as opposed to a canvas, where, once the painting's painted, the artist is unnecessary).  For the second disadvantage there is no solution, but for the first, it is relatively easy to create a class of linear fractals which are functions in the traditional sense of the word:  for any x there is one and only one y.  To produce linear fractal functions, only one change needs to be made to the basic equations:
 
            $x' = ax + c$,
and     $y' = dx + ey + f$, where $a$, $c$, $d$, $e$, and $f$ are all constants (the $by$ term is gone).
 
In other words, the resultant x coordinate can't depend in any way on y.  This kind of transformation is a skew transformation. 
 
The simplest approach to designing a fractal sound is to pick <time, amplitude> pairs that the fractal should pass through.  Given $N + 1$ such ordered pairs $(t_n, A_n)$, $n = 0, 1, \ldots, N$, let's restrict things so that $t_0 = 0$ and $A_0 = 0$ (0 is defined to be the ambient air pressure), that $A_N = 0$ (we end up at the ambient air pressure), and then normalize time so that $t_N = 1$ (the end of the sound occurs at $t = 1$).  In this case, the equations for the constants $a$ through $f$ for each of the transformations $n = 1$ through $N$ are:

      $a_n = t_n - t_{n-1}$,      $b_n = 0$,      $c_n = t_{n-1}$,
      $d_n = A_n - A_{n-1}$,      $e_n$ free to choose (see below),      $f_n = A_{n-1}$.

Mathematically, $e_n$ could be anything, but in order to make sure these transformations are contractive it should be in the range $(-1, 1)$.  $e_n$ represents the amount by which the entire waveform is scaled vertically between the two points $t_{n-1}$ and $t_n$.  Negative numbers mean that the waveform is inverted.  Once you've decided what your resolution in time should be, you can calculate enough points along the time axis to scale to whatever length you decide one cycle to be.  It's also probably important to note that each "sample" (the value of the amplitude at any particular time) is often represented on the computer as an 8-bit value, so you need to make sure that your mathematical operations don't overflow.  Below are some linear fractals which are functions.  You can see that some of these waveforms might be interesting to listen to, and might even have some musical value. 

Figure 6.93 Some fractal functions and periodic versions of them.
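Putting the pieces together, here is a small Python sketch of a fractal waveform (our own illustration; the interpolation points and the $e_n$ values are arbitrary choices, and the constants follow the formulas above):

    import random

    # <time, amplitude> points to pass through, with t0 = 0, tN = 1, A0 = AN = 0.
    T = [0.0, 0.3, 0.7, 1.0]
    A = [0.0, 0.8, -0.5, 0.0]
    E = [0.4, -0.3, 0.5]          # vertical scaling e_n, each inside (-1, 1)

    maps = []
    for n in range(1, len(T)):
        maps.append((T[n] - T[n - 1],      # a_n  (b_n is always 0)
                     T[n - 1],             # c_n
                     A[n] - A[n - 1],      # d_n
                     E[n - 1],             # e_n
                     A[n - 1]))            # f_n

    # The Chaos Game now produces <time, amplitude> samples of the waveform.
    t, y = 0.0, 0.0
    samples = []
    for _ in range(20000):
        a, c, d, e, f = random.choice(maps)
        t, y = a * t + c, d * t + e * y + f    # x' = ax + c;  y' = dx + ey + f
        samples.append((t, y))
    # Sort by t, resample evenly, and scale to 8 bits to play it as sound.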
 

Odds and Ends - A couple of benders
 
If a linear transformation is contractive, it is contractive everywhere.  There are also many transformations which are contractive in places, but not contractive everywhere.  The function

      $f(x) = x^2$

is a good example.  Between $-1$ and $1$ this function is contractive; elsewhere it is not.  If we take a point, say 0.8, feed it into the equation, feed that result into the equation, and so on, we see that the result gets smaller and smaller and approaches a resting point, an attractor, at 0.  A point greater than 1, iteratively fed into the same equation, will grow without bound.  The curious point 1 itself could be likened to an egg balanced on its end -- it will stay where it is, but give it any slight push (i.e. any slight variation from exactly 1) and it falls over, either towards 0 or towards infinity.  This principle of domains within which a transformation is contractive is related to the idea of basins of attraction.  In the earlier analogy of a ball bearing in a bowl, we restricted ourselves to situations in which we knew the ball bearing wouldn't fly out of the bowl.  Once out of the bowl, the ball is virtually guaranteed never to end up at the bottom.  The volume within the bowl can be thought of as the basin of attraction of the attractor which resides at its bottom. 
 
It's possible to create fractals with systems of transformations which are not contractive everywhere, as long as we restrict ourselves to the regions of the transformations which are contractive.  For example, it's not necessary for the "constants" a, b, c, d, e, and f of the linear transformations to actually be constants, as long as, over the domain that we're interested in, their absolute values never become 1 or larger. 
 
If we knew the left and right boundaries of our fractal, call them $X_L$ and $X_R$, we could introduce a squared term into our transformation and still guarantee that it's contractive.  Take a look at this way of writing a parabolic equation:

      $y = r(x - p)(x - q)$.
 
In this form of the equation, the parabola crosses the x-axis at $p$ and $q$, and $r$ is a scaling factor.  If we let $X_L$ be $p$ and $X_R$ be $q$, then the only point we need be concerned with is the midpoint between them, because that's where the maximum (or minimum) will be (why?).  At the midpoint $x = (p + q)/2$, the parabola takes the value $-r(q - p)^2/4$.  So if we want to make sure that this extreme value is no greater than a value $h$, we find that $r$ is given by

      $r = \dfrac{-4h}{(q - p)^2}$.
 
Now, let's go back to our original linear transformation equations,
 
            $x' = ax + by + c$,
and     $y' = dx + ey + f$,
 
and consider that for any of the "constants" $a$, $b$, $d$, or $e$, we could substitute a variable expression of the form

      $a + r(x - X_L)(x - X_R)$,
 
the only restriction being that at no time can we allow its absolute value to reach 1 (actually, things get kind of messy if you get anywhere near that, but if messy's what you like…).  At this point we have equations of the form
 
            $x' = \left[a + r(x - X_L)(x - X_R)\right]x + by + c$,
and     $y' = dx + ey + f$.
 
We could do the same basic thing with the y variable, and we could also substitute in similar equations for the other constants b, d, and e.  In this way we can cause the transformed images to bend parabolically, either as a function of x or y or both.  At the edges of the fractal the squared terms have no influence, but towards the center the effect is more pronounced. 

 


 


Figures 6.94 - 6.96 Sierpinski Triangles with small amounts of non-linear terms introduced.
 

Figures 6.94 through 6.96 show examples of the above algorithm applied to the IFS of a Sierpinski Triangle.  Figure 6.94 shows what happens if we modify the constant a, 6.95 shows what happens if we modify b, and 6.96 shows a combination of a modified b and a modified d.  They are by no means works of art, but they do indicate the potential for creating interesting images. 
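A hypothetical Python sketch of the bending trick (our own; the bend strength r is chosen small enough that the variable "constant" a stays well below 1 in absolute value over the fractal's extent):

    import random

    X_L, X_R = 0.0, 1.0        # left and right boundaries of the fractal
    r = -0.4                   # bend strength; keep |a + r(x - XL)(x - XR)| < 1

    BASE = [
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.5),
        (0.5, 0.0, 0.0, 0.0, 0.5, 0.0),
        (0.5, 0.0, 0.5, 0.0, 0.5, 0.0),
    ]

    p = (0.0, 0.0)
    dots = []
    for i in range(20000):
        a, b, c, d, e, f = random.choice(BASE)
        a = a + r * (p[0] - X_L) * (p[0] - X_R)   # 'a' now varies with x
        p = (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
        if i > 10:
            dots.append(p)
    # The images bow parabolically: no effect at the edges, most in the middle.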
 
A word to the wise, though -- when you introduce powers into equations that get called over and over and…  things get noticeably slower on computers.  There's always a tradeoff.
 
Finally, one can make the constants a through f depend on another variable; let's call it t.  If we restrict the range of t to be between 0 and 1, we can define a general equation for each of a through f like so (using $a$ as the example):
 
      $a(t) = a_0(1 - t) + a_1 t$,   where $a_0$ is the value of $a$ when $t$ is 0, and $a_1$ is the value of $a$ when $t$ is 1. 
 
[Missing]
Figure 6.97 5 frames of an IFS "movie".
 
In this way we can create families of fractals.  Each value of t produces a different fractal, and one can move gradually from one interesting fractal to another.  Figure 6.97 shows 5 such "frames" of a sequence.  If your computer is fast enough, you can do real-time fractal animation.  A lower-end solution is to calculate the set of frames at a slower rate and play them back later as a movie. 
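A minimal sketch of the idea in Python (our own; the two coefficient lists are placeholders for any pair of IFSs with matching numbers of transformations):

    def blend(ifs0, ifs1, t):
        # Slide every constant from its t = 0 value to its t = 1 value.
        return [tuple((1 - t) * u + t * v for u, v in zip(m0, m1))
                for m0, m1 in zip(ifs0, ifs1)]

    ifs0 = [(0.5, 0.0, 0.0, 0.0, 0.5, 0.5)]    # IFS at t = 0 (placeholder)
    ifs1 = [(0.5, 0.3, 0.0, 0.0, 0.5, 0.5)]    # IFS at t = 1 (placeholder)

    frames = [blend(ifs0, ifs1, k / 30.0) for k in range(31)]
    # Render each frame's IFS with the Chaos Game and play the frames back.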
 
Conclusion
 
In this section we've seen how linear fractals are designed and have examined the mathematics behind them.  A few examples of practical applications of IFS fractals were discussed.  Linear fractals are arguably the most useful fractal types in graphical design, compression techniques, and sound generation.  There are surely many other areas in the arts and sciences which could benefit from examining the roles linear fractals can play.  Each of us brings our own perspective and experiences into the realms of fractal geometry; it is hoped that you will forge a bridge between the things you know best and this strange, recursive kingdom, and discover new worlds previously unseen. 
 
In the next chapter we will examine programming techniques used in creating many of the fractals we have seen here and in earlier chapters.
 
 
 
 
 


[1] Translated from Latin by Leon Richardson of the University of California, Berkeley (1934).
[2] Images can be created by entering the Newton's method equation seen in Chapter 7, entering the iteration function in the MandelMovie equation editor, or using a predetermined fractal, as seen here from Mandella.
[3] "The Newton-Fourier Imaginary Problem," published in the American Journal of Mathematics 2, 1879.
[4] Created using the equation editor in MandelMovie 1.88.
[5] Using MandelMovie 1.88 or greater on a Macintosh with a math coprocessor, you can enter an equation to do Newton's Method; for example, $(z^3 + 1)/(3z^2)$ is entered as (z*z*z+1)/(3*z*z).  With these equations a constant c is not entered.
[6] For which he won the 1924 Nobel Prize in Physics.
[7] Studies have shown that listening to higher-dimensional music (a Mozart sonata, for instance) can increase mental activity and produce higher test scores; from a 1993 study done by psychologists at the University of California at Irvine, published in the journal Nature.
[8] See The Science of Fractal Images (1988), pages 39-44, by Richard Voss, for the original work on fractal music.
[9] You can create "randomly oriented" fractals using versions of FractaSketch 1.4 or greater, since these types of fractals need to be linked together.
[10] Work from Jenny Harrison at the University of California at Berkeley.
[11] This principle is used in statistics to calculate standard deviations.
[12] John Horton Conway popularized cellular automata with a program he wrote in the 1970s that would simulate cell regeneration, called Life.
[13] There is speculation that Richard Voss uses a Gaussian noise transformation in the frequency domain, passed through a 1/f filter; the image is then accentuated with ray tracing and color dithering.  This procedure is very computationally intensive even on powerful machines.
[14] One of them, Loren Carpenter, had originally come from Boeing, where he had also worked on fractal landscapes.
[15] This group was spun off to start a new computer graphics company, Pixar, located in San Rafael and Richmond, California.  Pixar became the first company to win an Academy Award for a computer-animated film, Tin Toy (1988).
[16] Articles: Scientific American (Sept. 1984), page 156, and Creative Computing (July 1985).
[17] Work done by Hentschel and Procaccia (1984).
[18] Fittingly enough, Michael Barnsley has started a company, Iterated Systems Inc., in Atlanta, Georgia, to develop and market fractal compression schemes.