Fundamental Images

NOT READY to be read: I made an error that I still have to fix.

example: ababab... is also a no-gradient image


Lately I have been thinking about fundamental images.

The smallest image is a single sample, a point. An image cannot be smaller than a sample.

Such a point can have a quantization precision (e.g. 2-bit, 8-bit, 32-bit float of luminance...).

Images are useless to us unless we can see them. And at some point, extra precision (more samples, more precision per sample) yields no further significance to us humans, with our particular perceptual spectrum. In 2009 the common man knows the difference between so-called standard and so-called high definition television, and knows somewhat the difference between a camera that captures 1,000,000 samples and one that claims 20,000,000...

So at some point the return on investment of extra precision for a particular image fades as you further increase the resolution.

Single Point Images

So a single point sample image (or a color solid) is the base of all imaging. Such an image can be arbitrarily resized from infinitely small to infinitely large without changing anything. So a first distinction within the field of all potential images is possibly the average color of all the samples in it (assuming we start with a sampling grid where the distance between all samples is equal).

Already a first distinction has to be made here. Such a convention is called a colorspace. In photometric modeling, something called a white point would be the reflection of the sun on a white board. That value we call 1.0. From the standpoint of light and sensing media we can also say that 0.0 would be zero photons hitting the board during the capture interval. From there our language already slips semantically, as we are forced to distinguish between mathematical linearity (where 0.5 is 50% grey), light encoded as a power of 2 (also known as gamma 2), and our own perception of a greyscale gradient as linear. This we will discuss elsewhere; for our purpose here, defining fundamental images, we will stay in the mathematical world where 0.5 is the center between 0 and 1.0.
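
As a small illustration, here is a minimal NumPy sketch of that distinction, assuming the plain power-law form of gamma 2 (stored value = square root of the linear value):

import numpy as np

# Mathematically linear values: 0.5 is exactly halfway between 0.0 and 1.0.
linear = np.linspace(0.0, 1.0, 5)

# Gamma 2 encoding: the stored value is the square root of the linear value,
# so mathematical mid grey (0.5) is stored as ~0.707.
encoded = linear ** (1.0 / 2.0)

# Decoding squares the stored value, recovering linear light.
decoded = encoded ** 2.0

print(linear)    # [0.   0.25 0.5  0.75 1.  ]
print(encoded)   # [0.    0.5   0.707 0.866 1.  ]
print(np.allclose(decoded, linear))  # True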

So values under 0 and over 1, undershooting and overshooting, are quite common / possible in image processing. But let's ignore them as well for now. Our purpose here is to identify whether there are basic images that tell us about real or natural images. So the first fundamental, or moment, of an image would be the average of all its sample values.
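
In code, this first moment is simply the mean over all samples (a minimal NumPy sketch with a made-up 2x3 image):

import numpy as np

# A 2x3 greyscale image, samples in [0.0, 1.0].
image = np.array([[0.1, 0.5, 0.9],
                  [0.2, 0.6, 0.7]])

# The first fundamental, or moment, of the image:
# the average of all its sample values.
print(image.mean())  # 0.5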

Random Images

At the other extreme we could say that the set of all possible values of a sample, times the resolution of the image, creates a potential field of all possible images. For our purpose, though, what matters is a particular subset which fits some perceptual threshold, where tiny variations cannot produce a result we can identify as different. This essentially is what allows us to do analog to digital conversions that are useful. So it converts the problem from an infinite set to a very large pseudo-infinite one.

Since we started our infinity bounding with the idea of a black and white point, the base random image would be one where each pixel has the same chance of being anywhere between 0 and 1.0, and thus would average to 0.5 as a solid average. That is, the higher the number of samples you have in an image (resolution), the more the initial noise field will tend to average to 0.5.
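
A quick NumPy sketch of that tendency (the seed and resolutions are arbitrary):

import numpy as np

rng = np.random.default_rng(1)

# Uniform noise fields of increasing resolution: every sample is equally
# likely to land anywhere in [0.0, 1.0], so the expected average is 0.5.
for side in (2, 16, 128, 1024):
    noise = rng.uniform(0.0, 1.0, size=(side, side))
    print(side, noise.mean())  # drifts toward 0.5 as the resolution grows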

Constant Gradient Images

So if you take any image whose size is larger than one sample and average the samples into 4 quadrants, you will have 4 values: a 2x2 image. There are various ways to calculate that. One would be nearest corner: are you in quadrant 1, 2, 3 or 4 of the image? Average the pixels from the same quadrant and write that as the pixel value in the 2x2 image. Another technique would be, for each corner, to say the matching corner of the source image (the one on the side of the 2x2 pixel being computed) has a weight of 1.0 and the opposite diagonal corner has a weight of 0.0, so that each sample in the source image has a weight in the overall computation of the destination pixel as a function of its distance to the matching corner. These two techniques will usually produce slightly different answers, 2x2 different pixels, though in some cases the difference is within the epsilon of our computation precision (within the smallest representable value).

There exists a set of images that will return the same value for all 4 pixels, and the same value as the average pixel. This already tells us something about the source image: such a 2x2 image we would say has zero gradient. As we increase the size of the image to 3x3, 4x4 pixels, if the source image is not a constant solid color image we will get some pixels that are different. The 2x2 image should average to the same value that all the samples of the source image average to.
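
A minimal NumPy sketch of both reductions; the exact fall-off of the corner weighting is my assumption, as the text only fixes weight 1.0 at the matching corner and 0.0 at the opposite diagonal corner:

import numpy as np

def quadrant_average_2x2(img):
    """Technique 1: split the image into 4 quadrants and average each."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    return np.array([[img[:h2, :w2].mean(), img[:h2, w2:].mean()],
                     [img[h2:, :w2].mean(), img[h2:, w2:].mean()]])

def corner_weighted_2x2(img):
    """Technique 2: each output pixel is a weighted average of all source
    samples, weight 1.0 at its own corner falling to 0.0 at the opposite
    diagonal corner (here: weight = 1 - normalized distance to the corner)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ny, nx = ys / (h - 1), xs / (w - 1)          # normalized coordinates
    diag = np.hypot(1.0, 1.0)
    out = np.empty((2, 2))
    for cy in (0, 1):
        for cx in (0, 1):
            dist = np.hypot(ny - cy, nx - cx) / diag
            weight = 1.0 - dist                  # 1 at corner, 0 at opposite
            out[cy, cx] = (img * weight).sum() / weight.sum()
    return out

img = np.random.default_rng(2).uniform(size=(64, 64))
print(quadrant_average_2x2(img))   # the two techniques usually produce...
print(corner_weighted_2x2(img))    # ...slightly different answers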

Constant Gradient Images : Linear Ramps

So a 2x2 image is a particular type of image. It is a mathematically linear ramp. A linear ramp is something that has a constant gradient (the difference between a sample and its neighbors is the same anywhere in the image). You can have a linear ramp of any resolution. Scaling the linear ramp from 2x2 to any resolution returns another linear ramp, and rescaling that resulting linear ramp back to 2x2 should return the identity - the same ramp we started with.
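
A sketch of such a ramp and its properties (the corner values are chosen so the four corners lie exactly on a plane):

import numpy as np

def linear_ramp(h, w, corners):
    """Interpolate 4 corner values [[tl, tr], [bl, br]] over an h x w grid."""
    (tl, tr), (bl, br) = corners
    ny = np.linspace(0.0, 1.0, h)[:, None]
    nx = np.linspace(0.0, 1.0, w)[None, :]
    top = tl + (tr - tl) * nx
    bot = bl + (br - bl) * nx
    return top + (bot - top) * ny

ramp = linear_ramp(256, 256, [[0.0, 0.5], [0.5, 1.0]])

# Constant gradient: the difference between horizontal neighbors is the
# same everywhere in the image (and likewise vertically).
print(np.allclose(np.diff(ramp, axis=1), np.diff(ramp, axis=1)[0, 0]))  # True

# Round trip: the corner samples of the scaled-up ramp recover the 2x2 ramp.
print(ramp[0, 0], ramp[0, -1], ramp[-1, 0], ramp[-1, -1])  # 0.0 0.5 0.5 1.0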

Constant Gradient Images : Sample Size Grids

What other Constant Gradient Images would there be? Well, an alternating pattern of 0.0 and 1.0 on a line, followed by 1.0 and 0.0 on the next line and so on, would create a source field that has a constant gradient (first derivative). In a binary image, at a given resolution, there would only be two possible such images. As the precision of the samples increases, so does the number of variations of that ideal image. In order to have a constant gradient, such an alternating pattern does not need to use a specific pair of values; any two values will do. Such a particular image would average to one value plus the other over 2, so if it alternates 0.0 and 1.0 the result would be 0.5 as the solid average.
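
A sketch of such a grid (here with the 0.0 / 1.0 alternance):

import numpy as np

def alternating_grid(h, w, a=0.0, b=1.0):
    """Alternating a/b pattern, flipped on each line (a checkerboard)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.where((ys + xs) % 2 == 0, a, b)

grid = alternating_grid(8, 8)
print(grid[:2, :4])   # [[0. 1. 0. 1.]
                      #  [1. 0. 1. 0.]]
print(grid.mean())    # (a + b) / 2 = 0.5 for the 0.0 / 1.0 case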

What is not a Constant Gradient Image?

Circular ramps, for example, might have zero gradient magnitude (in the 2x2 reduction sense above) yet have a gradient direction. Similarly, any regular wave pattern that tessellates a grid could be said to have zero gradient magnitude. Zero gradient images are often usable to understand the "geometry" of a resampling operation, as they have clear equidistant summits in source space, allowing us to visually assess the result of a reconstruction into a destination image.
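
A sketch of one such test pattern; the cosine form is my choice, the text only requires clear equidistant summits:

import numpy as np

def wave_grid(h, w, period=16):
    """A regular wave pattern tessellating the grid: summits (value 1.0)
    sit on an equidistant lattice in source space."""
    ys, xs = np.mgrid[0:h, 0:w]
    return 0.25 * (1 + np.cos(2 * np.pi * ys / period)) \
                * (1 + np.cos(2 * np.pi * xs / period))

pattern = wave_grid(128, 128)
print(pattern.max(), pattern.min())  # summits at 1.0, valleys at 0.0
# After any resampling, checking whether the summits are still equidistant
# is a quick visual assessment of the operation's geometry.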

Mapping the Potential Image Field to a 2x2 version of the image

(this is wrong; probably under 4x4 this is undefined, as a ramp cannot be distinguished from a square wave or from a triangle wave)

If the 2x2 version of our image is a linear ramp (is not a constant color over the surface), then this already tells us a few things. First, the solid average, if scaled up to 2x2, will show a difference to those values. We can inverse project that to obtain what is known as a planar perspective transform. One thing we are asking here is:
 
we have:
xx
xx
 
what does this tell us about:
 
0000
0xx0
0xx0
0000

that is, about the samples outside of the 2x2 image?
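
One hedged way to read that question in code: treat the 4 known samples as defining a plane and extrapolate it to the surrounding positions (the sample values are made up, chosen so the 4 corners lie exactly on a plane):

import numpy as np

# The 4 known samples of the 2x2 image ("xx / xx" above).
known = np.array([[0.2, 0.4],
                  [0.5, 0.7]])

def extend_plane(corners, pad=1):
    """Evaluate the ramp defined by the 4 samples on a grid that extends
    pad samples beyond the 2x2 block on every side (the "0" positions)."""
    (tl, tr), (bl, br) = corners
    coords = np.arange(-pad, 2 + pad).astype(float)   # -1, 0, 1, 2
    ny = coords[:, None]
    nx = coords[None, :]
    return tl + (tr - tl) * nx + (bl - tl) * ny + (tl + br - tr - bl) * nx * ny

# Extrapolated samples can undershoot 0.0 or overshoot 1.0, as noted earlier.
print(extend_plane(known))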

Furthermore, this further clarifies the noise potential field. Now not all images are possible, only those consistent with the plane that passes through our 2x2 image: only a slice through that statistical possibility. So because we started with a particular average color, our noise image averaged to a particular value; the noise field itself had a color, which for now we assume is somewhere between 0.0 and 1.0.

If we assume our samples are between 0.0 and 1.0 and say the average sample is 0.6, this is a bit like saying that half the samples would be between 0.6 and 1.0 and half between 0.0 and 0.6. That would be inaccurate: if the average sample was 0.8 on one side and 0.3 on the other side, then the overall average sample would be 0.55, not 0.6. 0.6/0.55 is about 1.09, so it's actually 0.5 × 1.09, about 54.5%, that would be equal to or over 0.6. When we know 4 values instead of 1, we can calculate an interpolated value between them to modulate our potential image based on the normalized distance (between 0 and 1) within that potential image. So on one side of the potential noise field maybe it's 55% of samples being over 0.6, and on the opposite side maybe it's 53%...
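
A sketch of that modulation, interpolating the 4 known values over the field (the corner values are illustrative only):

import numpy as np

def expected_field(corners, h, w):
    """Interpolate the 4 known samples of the 2x2 image to get a per-pixel
    expected value at every position of the potential image."""
    (tl, tr), (bl, br) = corners
    ny = np.linspace(0.0, 1.0, h)[:, None]   # normalized distance, 0..1
    nx = np.linspace(0.0, 1.0, w)[None, :]
    top = tl + (tr - tl) * nx
    bot = bl + (br - bl) * nx
    return top + (bot - top) * ny

field = expected_field([[0.55, 0.58], [0.56, 0.53]], 256, 256)
# One side of the potential noise field is now biased toward ~0.55,
# the opposite side toward ~0.53, modulating the per-pixel expectation.
print(field[0, 0], field[-1, -1])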

There is still a long way to go to arrive at a real image. As we explain in the first volume, in our semi-criticism of Wolfram's model of the universe, it would take a long time to tune in on a Playboy centerfold even if we started with a 2x2 version of such an image.

What else does gradient tell us?
Gradient is detail. Regions of an image without gradient are flat areas. Gradient magnitude is a measure of edges within an image. Another thing that allows us to acquire more resolution of understanding is to look at the gradient at different resolutions of an image. An area with a flat gradient at one resolution has a good chance of having no gradient at the next resolution level (at higher resolution). Images with zero gradient magnitude will often scale up to also have zero magnitude. It is possible to design a particular frequency chart that happens to be invisible under a particular aperture model. We are aware that we can design undersampled cases; for example, we can shoot a train whose wheels revolve at the same rate as the film camera's shutter, so that in the images they appear not to move. We can create some very specific pattern to break any filter design.

One thing gradient tells us is the actual resolution of an image. An image is rendered or captured at a particular resolution; however, all the samples might not be at the same resolution. To simplify away the optics of imaging (for one, if you capture in color, the different color samples might be spatially adjacent): imagine you shoot an outdoor picture, and for the sake of argument what you capture, from the standpoint of an idealized sample, is a cone that captures a set of photons. A photon travels at about 300,000 kilometers per second, so if your exposure is set to 1 second, that would be the set of photons within 300,000 kilometers that happen to hit the sensor. Now another component of the optical system is the lens. The lens can only capture in focus within a certain range. If you shoot something, you have probably noticed that some objects are more in focus than others (leaving aside motion blur caused by motion). Depth of field is directly related to gradients and to resolution. It's probably a fine abstraction to say, then, that different pixels of your image have different resolutions. Conceptually, say you shot something and it's all blurry, and you reshoot the same thing and now it's well defined. If you scaled both images down until they looked the same, that would sort of be telling you the resolution of one versus the other.
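
A sketch of gradient magnitude measured across resolutions (plain finite differences and box-filter downscaling, an assumed stand-in for any particular filter):

import numpy as np

def gradient_magnitude(img):
    """Finite-difference gradient magnitude: a simple measure of edges."""
    gy, gx = np.gradient(img)
    return np.hypot(gy, gx)

def halve(img):
    """Box-average 2x2 blocks: the image at the next (lower) resolution."""
    h, w = img.shape
    cropped = img[:h - h % 2, :w - w % 2]
    return cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

level = np.random.default_rng(3).uniform(size=(256, 256))
while min(level.shape) >= 2:
    # Averaging removes detail, so the mean edge measure drops per level.
    print(level.shape, gradient_magnitude(level).mean())
    level = halve(level)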

Sparse Higher Resolutions

If you need to wear glasses to see better, you already know first hand the difference between two resolutions. Another thing gradient tells us is that if you try to generate a higher resolution image from a lower resolution image, the initial model is something you can imagine more like a starfield than a big blurry thing, resolving as resolution increases. If you imagine a pixel being the average of a gaussian distributed set of samples, then you might consider that zooming in on a pixel, in a resolution extrapolation sense, resembles seeing a gaussian distributed sampling field. If you accept the abstraction that, even at capture, pixels of the same image can have different resolutions, then you can imagine an image as a pyramid of different resolutions (where the average color is the summit of the pyramid and the bottom is the maximum definition, the image resolution itself), and therefore where each pixel sits somewhere in that pyramid and shoots a cone of resolution, so that these gaussian blobs can overlap. Also, if you start seeing images like that, you note that visual edges shrink as the image resolution increases. An edge is proportionally larger at smaller res, not the same size, which is why hallucinated details cluster around larger gradient magnitudes.
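
A sketch of such a pyramid, built here with simple box averaging (the choice of filter is an assumption):

import numpy as np

def pyramid(img):
    """Resolution pyramid: full resolution at the bottom, the single
    average-color sample at the summit."""
    levels = [img]
    while min(levels[-1].shape) > 1:
        h, w = levels[-1].shape
        cropped = levels[-1][:h - h % 2, :w - w % 2]
        levels.append(cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return levels

levels = pyramid(np.random.default_rng(4).uniform(size=(64, 64)))
for level in levels:
    print(level.shape)   # (64, 64) ... (1, 1): every pixel of the original
                         # lives somewhere between these levels, depending
                         # on its local (depth-of-field limited) resolution
print(levels[-1])        # the summit: the image's average color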

Adaptive Sampling

It is now popular for ray-tracing renderers to perform what is known as adaptive sampling: across a scene to render, one varies the number of samples based on some criteria, which could be the curvature if you like, as we know that the edges of curved objects will need more samples to produce a properly anti-aliased result. Not to be confusing here, but a common way to describe things in rendering would be that a number of samples are collected into a rendered pixel. Imagine a stochastic process that throws dots within a circle (each a ray tracer sample, a ray sent to get a color back). If your first N samples come from different triangles or other decompositions, then you probably need more samples (hence adaptive). By the same principle, an image reconstruction filter could decide to work harder on certain portions of the image.
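
A sketch of that idea; the stopping rule used here (successive estimates agreeing within a tolerance) is one simple choice among many:

import random

def shade_pixel(sample_fn, min_samples=4, max_samples=64, tolerance=1e-3):
    """Adaptive sampling sketch: keep tracing rays through a pixel until
    the running estimate stabilizes (or a sample budget runs out)."""
    total, estimate = 0.0, 0.0
    for n in range(1, max_samples + 1):
        total += sample_fn()            # one ray: returns a color/luminance
        previous, estimate = estimate, total / n
        if n >= min_samples and abs(estimate - previous) < tolerance:
            break                       # samples agree: stop early
    return estimate, n

# A hypothetical pixel in a flat region (all rays hit the same surface) ...
flat = lambda: 0.7
# ... versus an edge pixel (rays land on two different surfaces).
edge = lambda: random.choice([0.1, 0.9])

print(shade_pixel(flat))   # converges after min_samples rays
print(shade_pixel(edge))   # keeps sampling: the estimates disagree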

Point Spread Function

An imaging system produces a weighted sum over a 2D imaging area, the resulting pixel sample. Physics is such that photons don't interact. What is of interest to our discussion is described, in photographic terms, as the circle of confusion. The nature of our optics is such that real lenses do not converge to a point but to a spot. The shape of such a spot is the point spread function, while the circle of confusion is a particular number: the largest such spot, at a particular distance from the lens, that will still be perceived as a point. It is thus also called the largest blur circle.
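
A sketch of a point source imaged through an assumed Gaussian point spread function (real PSFs vary with the optics; the Gaussian is only a stand-in):

import numpy as np

def gaussian_psf(radius, sigma):
    """A Gaussian point spread function: the spot a lens makes of a point."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()              # weights sum to 1

def apply_psf(img, psf):
    """Each output pixel is a weighted sum over a 2D area of the input:
    photons don't interact, so the system is a plain linear convolution."""
    r = psf.shape[0] // 2
    padded = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += psf[dy + r, dx + r] * padded[r + dy:r + dy + img.shape[0],
                                                r + dx:r + dx + img.shape[1]]
    return out

point = np.zeros((9, 9))
point[4, 4] = 1.0                       # an ideal point source
print(apply_psf(point, gaussian_psf(2, 1.0)).round(3))  # the spread spot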