by
Stewart Dickson
23115 Bluebird Drive
Calabasas, CA 91302 USA
Direct 3-D computer printers allow fully concrete, 3-D representations of mathematical systems, directly from the numerical representation.
The author references his work to date in the field of mathematical sculpture.
The author has begun work on integrating text information in Braille into three-dimensional models of mathematical surfaces. Future work, including manipulating computer-specified tactile surface texture in Computer-Aided Design, presents challenges to the technical interfaces in common practice in the Mechanical Prototyping industry. This paper will outline some proposed solutions.
This paper proposes the thesis that a richer, synergistic tactile experience can be afforded by combining the abstract information on a mathematical surface with the surface itself in-the-round in physical form.
The advantage of visual computing is that an image condenses voluminous data to a single representation which can be quickly assimilated. [2] Visual computing affords the scientist a view of the problem which is different from considering raw data or abstract statements. To view an image, the scientist makes broader use of his/her sense of vision.
Visual computing has resulted in unexpected discoveries and breakthroughs in the sciences. In 1975, Benoit Mandelbrot used a computer at IBM to make a graph of a dynamical procedure which was known to be chaotic. [3] Complex dynamics was a "monstrous" topic of inquiry because there were no known methods for dealing with such systems - not least of which was simply graphing the behavior, since so many calculations - billions - are required. What Mandelbrot found was a graph with symmetry, hierarchical self-similarity and infinite detail, which was apparent only visually. No graph of this kind had ever been seen before. Mandelbrot had to invent a new branch of mathematics - fractal mathematics - to describe what he found.
In 1983 a graduate student in Rio de Janeiro named Celso Costa wrote down an equation for what he thought might be a new minimal surface, but the equations were so complex that they obscured the underlying geometry. [4]
David Hoffman at the University of Massachusetts at Amherst enlisted programmer James Hoffman to make computer-generated pictures of Costa's surface. The pictures they made suggested first, that the surface was probably embedded (non-self-intersecting) in three-space - which gave them definite clues as to the approach they should take toward proving this assertion mathematically - and second, that the surface contained straight lines, hence symmetry by reflection through the lines.
The symmetry led Hoffman and William Meeks, III to extrapolate that the surface was radially periodic and that new surfaces of the same class could be achieved by increasing the periodicity. They did so by altering the mathematical description of the surface to be the solution to a boundary-value problem constrained by the behavior of a minimal surface at the periodic lines of symmetry. The result: Hoffman and Meeks proved that Costa's surface was the first example of an infinitely large class of new minimal surfaces which are embedded in three-space.
The technique Hoffman and Meeks used was to make a picture which caused them to modify their mathematical theory and discover something totally unexpected about that theory. They later extended their techniques to find minimal surfaces of more complex geometry and they also created pictures of them. This is a new kind of experimental mathematics and a procedure not far from creative visual art.
These various technologies all build three-dimensional objects via the common principle of dividing the object in software into a sequence of horizontal slices, from bottom to top, which the machine constructs in a physical material and binds together. Thus, these technologies may also be termed various means of Layer-manufacturing [13], each of which uses some kind of three-dimensional computer object slicing software.
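As an illustration of this common principle (a minimal sketch only, not the slicing software of any particular vendor), the following code intersects a triangle mesh with a sequence of horizontal planes and collects the line segments each layer produces; the mesh, layer height and function names are assumptions made for the example.

```python
# Minimal illustration of layer-manufacturing slicing: intersect each
# triangle of a mesh with the horizontal plane z = h and collect the
# resulting line segments, one build layer at a time (bottom to top).

def slice_triangle(tri, h):
    """Return the segment where triangle 'tri' crosses the plane z = h, or None.
    'tri' is a sequence of three (x, y, z) vertices; vertices lying exactly
    on the plane are ignored in this simplified sketch."""
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - h) * (z1 - h) < 0:          # this edge straddles the plane
            t = (h - z0) / (z1 - z0)         # interpolation parameter along the edge
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, z_min, z_max, layer_height):
    """Yield (height, segments) for each horizontal layer of the build."""
    h = z_min
    while h <= z_max:
        segments = [s for s in (slice_triangle(t, h) for t in triangles) if s]
        yield h, segments
        h += layer_height
```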
Compared to a two-dimensional computer rendering, a physical model in three full dimensions restores a dimension of information which was abstracted away in the perspective rendering of the image [14]. Of course, the actual, 3-D object gives access to a computer model to someone who might not otherwise be able to benefit from computer modeling because of a visual impairment [15]. But, as viewing an image exercises the scientist's visual cortex and integrates this additional processing path into the mental experience of evaluating an abstract hypothesis, might not viewing a sculpture be a more corporeal experience than viewing a two-dimensional image? [16]
I would like to make the claim, to be tested later, that viewing a sculpture involves a different sense of apprehension than viewing a two-dimensional image. Certainly the tactile experience of an object enhances our understanding of the object. Is the integration of abstract information on an object with the physical object itself a synergistic experience? Is the total effect greater than the sum of the component elements?
In the work of David Hoffman and James Hoffman on the Hoffman-Meeks extrapolations of Costa's three-ended minimal surface, the computer-generated images show the Gauss Map as a color which varies across the surface according to the point-wise orientation of the normal vector to the surface. [17]
Figure 1): Costa's Three-Ended Minimal Surface, Image by David Hoffman and James Hoffman.
Likewise, the Sullivan/Francis/Levy "Optiverse" shows the orientation of the sphere as a color map during the eversion metamorphosis. [18] The 'inside' of the sphere has a different mapping of the normal vector to color from the 'outside', such that the two sides may be identified during the metamorphosis.
Figure 2): Optiverse Sphere Eversion, image by George Francis, John Sullivan and Stuart Levy.
Hanson et al. have used color to encode 4-D "Depth" and the complex phase of their so-called "Fermat" equations. [19]
In the usual case, a parametric surface maps a two-dimensional domain (u, v) to a three-dimensional range (x, y, z) in a one-to-one fashion. That is, every point (x, y, z) on the parametric surface in three-space corresponds to a unique (u, v).
Computer rendering systems typically employ a two-dimensional parameterization of a three-dimensional surface in order to depict a detailed colored texture on the surface. To achieve photorealism in a computer rendering, the image used for the texture map may be derived from a photograph of a real-world object's surface.
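The following sketch illustrates this (u, v)-to-(x, y, z) correspondence and the image lookup it enables, using the hyperbolic paraboloid z = u^2 - v^2 as an example surface; the image array, domain ranges and function names are assumptions made for illustration.

```python
# Illustrative parametric surface and texture lookup.  Because the map from
# (u, v) to (x, y, z) is one-to-one, each surface point corresponds to a
# unique texel of the rectangular image (up to scale and translation).

def surface_point(u, v):
    """One-to-one parametric map from the domain (u, v) to a point in 3-space."""
    return (u, v, u * u - v * v)             # hyperbolic paraboloid

def texture_lookup(image, u, v, u_range=(-1.0, 1.0), v_range=(-1.0, 1.0)):
    """Nearest-neighbour lookup of the texture image at parametric (u, v).
    'image' is assumed to be a 2-D array (list of rows) of color values."""
    rows, cols = len(image), len(image[0])
    s = (u - u_range[0]) / (u_range[1] - u_range[0])   # normalize u to [0, 1]
    t = (v - v_range[0]) / (v_range[1] - v_range[0])   # normalize v to [0, 1]
    return image[min(int(t * rows), rows - 1)][min(int(s * cols), cols - 1)]
```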
In communicating abstract information on a system represented in three-space in a computer visualization system, I propose mapping text to the surface to convey the connection between information-space and real three-space.
Consider the following figure:
Figure 3): Annotated Hyperbolic Paraboloid -- Computer Rendering by Stewart Dickson
Figure 3 depicts visually-readable captions on a mathematical 3-D surface in-the-round. [20] In particular, the captions depict the rearranged equations and curves one obtains by holding one of the variables in the equation for the hyperbolic paraboloid at a constant value: i.e., the parabolas in the X = 0 and Y = 0 planes, the hyperbolas parallel to the X-Y plane, and the degenerate hyperbola (two straight lines) in the X-Y plane at Z = 0.
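For reference, a standard form of the hyperbolic paraboloid and the cross-sections just described can be written as follows (the particular constants used in the figure are not reproduced here):

```latex
z = \frac{x^2}{a^2} - \frac{y^2}{b^2}

\begin{aligned}
  y &= 0: & z &= \tfrac{x^2}{a^2}   && \text{(parabola in the } Y = 0 \text{ plane)} \\
  x &= 0: & z &= -\tfrac{y^2}{b^2}  && \text{(parabola in the } X = 0 \text{ plane)} \\
  z &= c \neq 0: & \tfrac{x^2}{a^2} - \tfrac{y^2}{b^2} &= c && \text{(hyperbolas parallel to the X-Y plane)} \\
  z &= 0: & y &= \pm\tfrac{b}{a}\,x && \text{(degenerate hyperbola: two straight lines)}
\end{aligned}
```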
I believe this is a form of high-level 3-D Information Integration. It restores abstract information on the surface to the physical representation. Typically, when a mathematical surface is cast into physical form -- concretized -- the abstract information which brought about the three-dimensional object is left behind in the computer. There is typically a good deal of verbal explanation required to supplement the object itself. The object does not stand on its own.
Attaching captions to the surface might be a way of restoring the integrity of the abstract system which the object is intended to represent.
In the figure:
Figure 4): Hyperbolic Paraboloid 3-D Model (Stereolithograph) Annotated with Self-Adhesive Braille Captions, image by Stewart Dickson
I have attached captions printed in the DotsPlus proposed Braille mathematical typesetting standard to the surface of a Hyperbolic Paraboloid, rendered in Stereolithography. [21] [22]
Other work has been done on creating tactile textures in a computer system to represent abstract information, as a stand-in for color. Examples produced include two-dimensional tactile geographical maps that use multiple, distinct textures to denote political boundaries. [23]
However, there are limitations in today's commonly used computer-aided manufacturing infrastructure which impede what can be done.
Computer representation of tactile fonts and textures today almost always takes place in a text editing system, such as Microsoft Word. It is a strictly two-dimensional view of the world. The height or depth of Braille dots or textures is not represented in any explicit way until the document is presented to the embosser or to the thermal-swell paper. [24] [25]
So, the first step toward modeling Braille text directly into three-dimensional models in CAD is to create an explicit, 3-D representation of the Braille font in CAD, along with the system for kerning the type. [26]
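A minimal sketch of such an explicit layout step follows; the spacing, cell-advance ("kerning") and dot-height constants are placeholders rather than the values of any Braille standard, and the function names are invented for illustration.

```python
# Illustrative layout of one line of Braille as explicit 3-D geometry: each
# cell is a 2 x 3 grid, and each raised dot becomes a small bump centered at
# a computed (x, y, z) position.  All dimensions below are placeholders.

DOT_SPACING = 2.5    # mm between adjacent dot centers within a cell (assumed)
CELL_ADVANCE = 6.0   # mm advance from one cell to the next, i.e. the kerning (assumed)
DOT_HEIGHT = 0.5     # mm of dot relief above the base surface (assumed)

def cell_dot_centers(pattern, cell_index):
    """'pattern' is a set of dot numbers 1-6 (1-2-3 down the left column,
    4-5-6 down the right).  Returns (x, y, z) centers for the raised dots."""
    centers = []
    for dot in pattern:
        col = 0 if dot <= 3 else 1
        row = (dot - 1) % 3
        x = cell_index * CELL_ADVANCE + col * DOT_SPACING
        y = -row * DOT_SPACING
        centers.append((x, y, DOT_HEIGHT / 2.0))
    return centers

# Example: the letter 'b' is dots {1, 2}; the next cell advances by CELL_ADVANCE.
print(cell_dot_centers({1, 2}, cell_index=0))
```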
Computer rendering systems typically represent surface quality as a hierarchy of 'macrostructure', 'mesostructure' and 'microstructure'. [27] Macrostructure is the gross surface geometry, typically expressed as a polygon hull or parametric spline surface. Microstructure is typically detailed reflected color, expressed as a 'texture' map. Mesostructure is surface 'bump' or displacement information -- more detailed than comfortably expressed in explicit polygons -- and is also typically represented as a binary image, parametrically mapped to the geometry. [28]
Tactile texture, and possibly also tactile (Braille) captions fall into the class of 'mesostructure', which we would like to express more compactly than in an explicit geometrical representation (e.g., in polygons). However, the R-P Industry standards do not support displacement maps nor do the file exchange standards support parametric surfaces. [29]
Parametric surfaces are more convenient than polygon models for applying image-mapped information, because the surface parameterization (u, v) can correspond identically (or to within scale factor and translation) to the dimensions of the rectangular image map. Polygon models, even when derived from parametric equations, may not carry with them useful information on the parametric domain over which they were generated.
However, the geometric object exchange standard generally accepted by the Automated Fabrication industry at present is the so-called '.STL' file -- which is composed of nothing but triangular polygons, without additional (U, V) coordinates.
Proposed solutions include the following: i) A geometric object exchange standard for the Rapid Prototyping industry that accepts topologically cognizant parametric patch models which can be sliced. A Layer-Manufacturing slicing program needs precise knowledge of what is inside the computer-represented 3-D model, separate from the outside -- so that the printer may correctly fill the solid (inside) portions and leave empty the rest of the build space (envelope). [30]
Models composed of a collection of parametric patches (such as Non-Uniform Rational B-Spline -- NURBS -- surfaces), in which patch edges are meant to be coincident, are generally not explicitly closed. Additional information is usually required in order to fully 'stitch' patch edges together, so that the object can be known to be 'closed' by a computer program.
This topological problem extends the "Winged Edge" model for polygon meshes, described by Glassner, to parametric surfaces, in which an edge will contain at least four control vertices instead of only two endpoints. [31]
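To make concrete the kind of topological knowledge a slicer depends on, the following sketch tests whether an ordinary triangle mesh is 'closed' by counting how many faces share each edge -- exactly two in a watertight model. This is only the adjacency count that a winged-edge or stitched-patch representation makes explicit, not an implementation of either.

```python
# Watertightness check for a triangle mesh: in a closed (sliceable) model,
# every undirected edge is shared by exactly two triangles.

from collections import Counter

def is_closed(triangles):
    """'triangles' is a list of 3-tuples of vertex indices."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron is closed; remove one face and it no longer is.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_closed(tetra), is_closed(tetra[:3]))    # True False
```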
ii) Alternatively, we may map 'mesostructure' detail information to polygon meshes by devising ways of "parameterizing" the surfaces. This problem is related to "parameterization" of implicit surfaces -- those expressed as a function f(x, y, z) = 0. Pedersen describes a method of obtaining piece-wise parametric 'patchinos' for interactive placement of texture (two-dimensional color 'microstructure') on an implicit surface. [32]
Mapping explicitly-modeled 3-D Braille text into surface parametric space is a similar problem to texture mapping. The geometry of each dot is modeled as a polygon mesh. The dots are composed in 3-D by a program for generating Braille characters from ASCII text. The height of each dot is oriented to the normal vector to the surface at the U-V parametric coordinate to which the text string is mapped. We would like to be able to use the features Pedersen has demonstrated with color 'microstructure' for interactively repositioning 3-D mapped tactile text and texture ('mesostructure') on an arbitrarily-formed, curved surface in a 3-D CAD system. [33]
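A sketch of that placement step follows: the dot is anchored at the surface point P(u, v) and raised along the surface normal there, with the normal estimated from the cross product of the partial derivatives of the parameterization (here by finite differences). The function names and the finite-difference estimate are illustrative assumptions, not the author's production code.

```python
# Place one Braille dot on a parametric surface P(u, v) -> (x, y, z),
# oriented along the surface normal at that parametric coordinate.

def normal_at(P, u, v, eps=1e-5):
    """Unit normal of the surface at (u, v), from dP/du x dP/dv."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    du = sub(P(u + eps, v), P(u - eps, v))       # ~ dP/du (finite difference)
    dv = sub(P(u, v + eps), P(u, v - eps))       # ~ dP/dv
    n = cross(du, dv)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

def place_dot(P, u, v, height):
    """Return the base point and tip of one dot raised 'height' along the normal."""
    base = P(u, v)
    n = normal_at(P, u, v)
    tip = tuple(b + height * c for b, c in zip(base, n))
    return base, tip

# Example on the hyperbolic paraboloid of Figures 3 and 4:
hyp_par = lambda u, v: (u, v, u * u - v * v)
print(place_dot(hyp_par, 0.3, -0.2, height=0.5))
```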
Braille text could also be represented as an image-based surface displacement map. What is required in this case is an R-P slicer which can evaluate surface displacement maps. Such a program or capability does not currently exist in the Rapid Prototyping industry. It will also be required to resolve highly detailed 3-D texture which one would not want to represent any less compactly than in an image-based displacement map. [34]
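Until such a slicer exists, one way to approximate the effect with today's tools (at the cost of the compactness argued for above) is to 'bake' the displacement map into explicit geometry before export: move each vertex along its normal by the height sampled at its (u, v), then write the resulting mesh as the usual .STL. The following sketch assumes per-vertex normals and (u, v) coordinates are available, which is an assumption rather than a property of the .STL format itself.

```python
# Bake an image-based displacement map into explicit vertex positions,
# so that the detail survives export to a polygons-only format.

def sample(heightmap, u, v):
    """Nearest-neighbour lookup of a 2-D heightmap at (u, v) in [0, 1]^2."""
    rows, cols = len(heightmap), len(heightmap[0])
    return heightmap[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

def bake_displacement(vertices, normals, uvs, heightmap, scale):
    """Return new vertex positions, each moved along its unit normal by the
    sampled displacement value times 'scale' (in model units)."""
    displaced = []
    for p, n, (u, v) in zip(vertices, normals, uvs):
        d = scale * sample(heightmap, u, v)
        displaced.append(tuple(pi + d * ni for pi, ni in zip(p, n)))
    return displaced
```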
So, what does this have to do with art? Well, the problem I have stated so far is the "hard" problem. This is the obvious problem which can be logically formulated -- a solution to an obvious particular need: Access to 3-D computer graphics by those who cannot see a computer video screen.
But this is only the beginning. Beyond this, I would like to test the following thesis: Is the tactile integration of multi-dimensional abstract-space a Synergistic experience? Is the total experience of tactile mathematics greater than the sum of its constituent elements?
I imagine that the experience of reading tactile captions on a mathematical surface in physical 3-D would be as follows: assuming the caption describes a curve and follows that curve across the surface (as in Figures 3 and 4), the fingers doing the reading will be following the instantaneous tangent vector to the curve through space as the reader is reading the abstract, mathematical description of that curve.
Furthermore, the hand is also constrained to be oriented according to the tangent vector to the curve and the instantaneous normal vector to the surface as the reader is reading the information on the mathematics of the surface.
I see the potential synergism here. However, I have found that blind people tend to be hand-centric rather than object-centric as they read my objects in 3-D. That is, they will rotate the frame of reference of the object in space while keeping the frame-of-reference of their hands stationary. I don't know whether this relative "reversal" is significant or not. Again, this will have to be tested.
The emphasis in computing is on the Virtual -- to immerse the imagination in this plastic, abstract space -- to further disembody the mind. This is simply the continued tendency of writing and literature.
Art and sculpture have been in trouble for a long time. Here is Brancusi's "Sculpture for the Blind".
Figure 5): Constantin Brancusi, "Sculpture for the Blind (Beginning of the World)"
But this is not the way you will see it in the art museum! What you will see is a Plexiglas box completely surrounding and covering the sculpture -- protecting the material surface of the sculpture. The philosophical value of this sculpture is lost to its intended audience. That is the tragedy of the plastic arts -- that they are exclusively visual.
For those whose eyes do not work well enough to use a video screen or head-mounted display, the value of visual computing is totally lost. Modern tools like automated fabrication, on the other hand, now enable us to project the internal world outward, into physical space.
And, for the rest of us, casting the virtual into physicality forces the illusion to withstand the light of day -- to test its honesty.
Experiencing a physical object -- which occupies the same space we occupy -- gives a different sense of apprehension of the object than seeing a flat picture of it, even a moving one. Viewing a film clip of an object rotating, the brain has forgotten what the front looks like by the time the back rotates into view. Viewing the physical object, we have a more integrated idea of the whole object.
The composer Harry Partch established the need for a Corporeal Music which encompasses sound, vision and performed ritual. It is a theory of synergy, synaesthesia and a reaction against abstraction in music. It can be said that making physical sculpture from computer-generated designs is a similar reaction against the sterility of abstract data space. Only the physical object relates to us as physical beings. Only the physical object has life. Only the physical object has the power to resonate with our lives.
I would like to propose that in tactile, Corporeal Mathematics, one can achieve a true Integration of mind and body.
[1] Stephen Wolfram, "The Mathematica System for Doing Mathematics by Computer", <http://www.wolfram.com> "The Mathematica Book, Third Edition", (Wolfram Media/Cambridge University Press, 1996) (ISBN 0-9650532-0-2)
[2] T.A. DeFanti and M.D. Brown, "Visualization in Scientific Computing" (chapter), Advances in Computers, Vol. 33, Academic Press, pp. 247-305, Spring, 1991.
[3] Benoit B. Mandelbrot, "The Fractal Geometry of Nature", Freeman and Company, 1983.
[4] David Hoffman, "New Embedded Minimal Surfaces", The Mathematical Intelligencer, Vol. 9, No. 3 (1987).
[5] Stewart Dickson, "Mathematica Visualizations", <http://emsh.calarts.edu/~mathart/portfolio/SPD_Math_portfolio.html>
[6] Anshuman Razdan, J.W. Mayer, Ben Steinberg, "Scientific Visualization using Rapid Prototyping Technologies", Proceedings of the Sixth European Conference on Rapid Prototyping, 1997, pp. 171-175, Nottingham, U.K.
[7] 3D Systems, Inc., "Solid Imaging and Solid Object Printing", <http://www.3dsystems.com/>
[8] DTM Corporation, "Advanced Rapid Prototyping and Manufacturing Solutions", <http://www.dtm-corp.com/>
[9] Stratasys, <http://www.stratasys.com/>
[10] Helisys, "Layered Material Technology", <http://www.helisys.com/>
[11] Soligen, "Parts Now", <http://www.partsnow.com/>
[12] Z Corporation, "Office Compatible 3D Printers", <http://www.zcorp.com/>
[13] Elizabeth Hodgson, "Creating Art with Layer Manufacture (CALM)", report to TASC, University of Central Lancashire, UK (December 1998) <http://www.uclan.ac.uk/clt/calm/overview.htm>
[14] Ivars Peterson, "Plastic Math", Science News, Vol. 140, No. 5, pp. 65-80 (August 3, 1991).
[15] William J. Skawinski, Carol A. Venanzi, and Ana D. Ofsievich, "REAL VIRTUALITY: The Use of Laser Stereolithography for the Construction of Accurate Molecular Models", Department of Chemical Engineering, Chemistry, and Environmental Science, New Jersey Institute of Technology, <http://www-ec.njit.edu/~skawinsk/nano/nano.html>
[16] Harry Partch, "Genesis of a Music", New York: Da Capo Press, 1949, 1974.
[17] James Hoffman, "The Gauss Map", Mathematical Sciences Research Institute (MSRI), Berkeley, California. <http://www.msri.org/publications/sgp/jim/geom/surface/maps/gauss/mainc.html>
[18] John M. Sullivan, George Francis and Stuart Levy, "The Optiverse", University of Illinois, 1998 <http://new.math.uiuc.edu/optiverse/>
[19] Andrew J. Hanson and Tamara Munzner and George Francis, "Interactive Methods for Visualizable Geometry", IEEE Computer, Vol. 27, No. 4, pp. 73-83, July 1994. <http://www.geom.umn.edu/docs/research/ieee94/node8.html>
[20] Stewart Dickson, "Braille-Annotated Tactile Models In-The-Round of Three-Dimensional Mathematical Figures", <http://emsh.calarts.edu/~mathart/Annotated_HyperPara.html>
[21] Ibid.
[22] John A. Gardner, "The DotsPlus Tactile Font Set", Journal of Visual Impairment and Blindness, December, 1998, pp. 836-840
[23] John Gardner, Director, Science Access Project, Oregon State University, Personal communication, March, 1999.
[24] ViewPlus Technologies, TIGER Advantage Tactile Graphics and Braille Embosser <http://www.viewplustech.com/products.html>
[25] American Thermoform Corporation, Swell-Touch paper <http://www.atcbrleqp.com/swell.htm>
[26] Army High Performance Computing Research Center, "Wavefront Fonts", <http://www.arc.umn.edu/gvl-software/wavefront-fonts.html>
[27] Y. Yu, K. Dana, H. Rushmeier, S. Marschner, S. Premoze, and Y. Sato, "Image-based Surface Details", SIGGRAPH'2000 course notes, New Orleans, Louisiana, July 2000 <http://www.cs.berkeley.edu/~yyz/publication/>
[28] Steve Upstill, "The RenderMan Companion: A Programmer's Guide to Realistic Computer Graphics", Addison-Wesley, 1989, ISBN 0-201-50868-0
[29] Anshuman Razdan, Director, Partnership for Research in Stereo Modeling (PRISM), Arizona State University, Personal communication, July, 2000.
[30] Ibid.
[31] Andrew Glassner, "Maintaining Winged-Edge Models", Graphics Gems II; James Arvo, ed.; (IV.6 -- pp. 191-201) Academic Press, Inc.; ISBN: 0-12-064480-0.
[32] Hans Kohling Pedersen, "A Framework for Interactive Texturing on Curved Surfaces", SIGGRAPH 96 Conference Proceedings, pp. 295-302 (August, 1996), Addison-Wesley, ISBN 0-201-94800-1.
[33] Anshuman Razdan, Director, Partnership for Research in Stereo Modeling (PRISM), Arizona State University, Personal communication, July, 2000.
[34] Ibid.