If you consider the basic tenets of the Reciprocal System, "units of motion" are discrete, like links in a chain. Larson, in his discussion of "direction reversals," states that the only time you can change direction is at a unit boundary. In the Euclidean projection, there are three orthogonal axes divided into intervals of one natural unit--to borrow an old computer graphics term, Euclidean reality is "pixelated," much the way a computer draws on a monitor. Pixels have gotten very small these days, but if you take a magnifying glass to your monitor, you will see that it is composed of a bunch of square dots of color, and that a diagonal line actually looks like a staircase--it is not smooth. The same happens with curves--they are approximated, because the monitor is made of tiny, illuminated squares. (This was a big problem in the early days of computer graphics; designers were used to the smooth lines of a drafting board and raised a big fuss over the jagged lines of the low-resolution displays of the 1980s.)
Pixels can only be used as units, in the sense that a blue pixel is ALL blue; you cannot start with red on one side and end up with blue on the other. This is analogous to Larson's "discrete unit" postulate: you can only change color (intensity, etc.) after you exit one pixel and start another. Angled lines and curves therefore appear jagged. (A technique called anti-aliasing helps reduce this by fuzzing out the jagged portions, dimming adjacent pixels as an optical trick.)
Consider the same situation in Nature. The observable, measurable universe is also "pixelated" because of the Reciprocal System's discrete unit postulate and the absolute scaled, orthogonal, Euclidean projection. It's a grid of cubes--we just call it "quantized" rather than "pixelated."
Now what happens if you try to draw a circle with a radius of 1 pixel on a computer monitor? You get a 2x2 square with a circumference of 8 pixel units (assuming a 1:1 width:height ratio) and a diameter of 2 units, so PI, the ratio of circumference to diameter, comes out to 4 (not 3.14) due to this pixelation. Programmers who dealt with early computer graphics, where you worked at the pixel level, know that the perimeter of a pixelated circle is 4x its diameter, and that had to be accounted for whenever a user tried to pick a location on the circle, because the circle was only an approximation. Input devices were "light pens" or "tablets" in those days, and the best resolution you could return to the computer was the size of the pixel selected.
Kick the radius up to 2 or 3 and you get a series of solid squares, because the "real" circle still overlaps every pixel of its bounding square, so none of the corners get clipped. Make the diameter a million and considerable clipping occurs, and the ratio of circumference to diameter starts to approach the accepted value of PI.
[img]/files/Pi%20is%204.png[/img]
It was this knowledge of pixelated circles that gave me an understanding of Mathis' "pi = 4" concept. By assuming that the limit = 1 (rather than the limit -> 0), you quantize the system into linear, "pixelated" components, and there is no such thing as a curve--only an approximation of a curve, made of stair-stepped lines. That stair-stepping adds the extra distance to the perimeter that brings PI up to 4.0.
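This is easy to check directly. Here is a minimal Python sketch (my own illustration, not Mathis' or Larson's; the fill rule--count a pixel as part of the disk when any portion of it falls inside the circle--is an assumption) that rasterizes a disk and totals every unit edge separating a filled pixel from an empty one, i.e. the length of the jagged, quantized "circumference":

Code:
def staircase_perimeter(radius):
    # Rasterize a disk of the given radius, centered on a grid corner.
    # A pixel spanning [i, i+1] x [j, j+1] counts as filled when its corner
    # nearest the center lies inside the circle (assumed fill rule).
    def filled(i, j):
        nx = i if i >= 0 else -i - 1   # nearest corner's |x|
        ny = j if j >= 0 else -j - 1   # nearest corner's |y|
        return nx * nx + ny * ny <= radius * radius

    disk = {(i, j) for i in range(-radius, radius)
                   for j in range(-radius, radius) if filled(i, j)}

    # The quantized "circumference": unit edges between a filled pixel
    # and an unfilled neighbor (or the outside).
    edges = 0
    for (i, j) in disk:
        for (di, dj) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (i + di, j + dj) not in disk:
                edges += 1
    return edges

for r in (1, 2, 5, 10, 100):
    perimeter = staircase_perimeter(r)
    print(r, perimeter, perimeter / (2 * r))

The last column is 4.0 at every radius: clipping a corner pixel trades its two outer edges for two newly exposed ones, so the stair-stepped perimeter stays at 4 diameters no matter how large the circle gets.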
In the yin-yang terms of RS2:
- PI = 3.14159... when using the yin, temporal aspect of measurement (curved or polar). In Larson's "time region," where there is only time and rotation, speed is defined as (1/t)², not s/t. The "1" in the time region means that space is fixed at unit value, so the correct units for speed in the time region would be (1s/t)². This is in agreement with Mathis' concept of PI as an acceleration, and of orbital velocity having the units of v², not v.
- PI = 4.000 when using the yang, quantized spatial aspect of measurement (linear). Motion in space will only approximate a curve, due to this quantization (the discrete unit postulate). The value of PI is higher because you cannot make an arc, nor a diagonal, without resolving it into its x, y, z components for measurement (you cannot change direction except at a unit boundary, so you cannot "cut through" and take a shortcut across a unit of space); the short sketch after this list shows the effect on a simple diagonal.
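To see why resolving a path onto the axes inflates its length, take the diagonal of a unit square. A minimal Python sketch (my own illustration, under the same quantized-measurement assumption): however finely you subdivide the staircase, its length stays at 2, while the straight diagonal is sqrt(2) = 1.414...

Code:
import math

def staircase_length(steps):
    # Walk from (0,0) to (1,1) as a staircase of `steps` equal stair-steps,
    # each resolved into a "right" move and an "up" move of size 1/steps.
    size = 1.0 / steps
    return sum(size + size for _ in range(steps))

for steps in (1, 10, 1000, 100000):
    print(steps, staircase_length(steps), "vs. straight diagonal", math.sqrt(2))

The staircase length is 2 (up to floating-point rounding) at every step count; it never converges to sqrt(2), just as the stair-stepped circle never converges to 3.14159 times its diameter.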
To check my reasoning, I wrote a short program to pixelate a circular area, count the number of pixels that defined the area, and then divide by the square of the radius. As expected, the larger the radius, the closer the ratio came to 3.14.
So there are actually three different concepts for this ratio we refer to as PI:
1. Analog circumference / diameter = 3.14159... in all cases (yin aspect).
2. Quantized perimeter / diameter = 4.000 in all cases (yang aspect).
3. Quantized area / radius² = 4.000 -> 3.14159... as radius -> infinity ("Tao of Pi", I guess).
Here are the results:
Code:
Radius      Total     Filled   Pi (area)
     1          4          4   4.00000000
     2         16         16   4.00000000
     3         36         36   4.00000000
     4         64         60   3.75000000
     5        100         96   3.84000000
     6        144        132   3.66666667
     7        196        172   3.51020408
     8        256        224   3.50000000
     9        324        284   3.50617284
    10        400        352   3.52000000
Code:
Radius      Total     Filled   Pi (area)
   100      40000      31812   3.18120000
   200     160000     126424   3.16060000
   300     360000     283892   3.15435556
   400     640000     504220   3.15137500
   500    1000000     787344   3.14937600
   600    1440000    1133308   3.14807778
   700    1960000    1542092   3.14712653
   800    2560000    2013768   3.14651250
   900    3240000    2548164   3.14588148
  1000    4000000    3145544   3.14554400
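If you want to reproduce these counts, a minimal Python sketch of this kind of pixel count looks like the following (my program isn't reproduced here; the fill rule--a pixel counts when any part of it falls inside the circle, i.e. when its nearest corner is within the radius--is one reasonable choice, and it gives the same filled-pixel counts as the tables above):

Code:
def filled_pixels(radius):
    # Pixels of the 2r x 2r bounding square that overlap the circle.
    # By symmetry, count one quadrant and multiply by 4: the pixel spanning
    # [i, i+1] x [j, j+1] overlaps the circle when i^2 + j^2 <= r^2.
    quadrant = sum(1 for i in range(radius)
                     for j in range(radius)
                     if i * i + j * j <= radius * radius)
    return 4 * quadrant

print(f"{'Radius':>6} {'Total':>10} {'Filled':>10}   Pi (area)")
for radius in (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100, 500, 1000):
    total = (2 * radius) ** 2
    filled = filled_pixels(radius)
    print(f"{radius:6d} {total:10d} {filled:10d}   {filled / radius ** 2:.8f}")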
While writing these programs, I decided on a labeling convention: "circumference" describes the analog, yin measure of PI, and "perimeter" describes the linear, yang measure, since a perimeter is conceptually associated with the measurement of land as a series of linear distances.
So pick your piece of PI by the reference system in use: yin circumference, yang perimeter or area.