Sunday, March 20, 2016

Live interface of the Virtual History





In this new world of info, I would like to show how this idea was formulated:

Studying film theory and editing, practicing it, then moving on to Shakespeare, watching movies about Shakespeare and reading the plays. That probably sparked the move into technological manipulation of the senses, creating illusions or, as a subject, temporary delusions...such as dystopias, horror films, action, etc.
The edge of that is removed now. Dull as a '50s black-and-white television show.
All of the best moving films still have edgy moves, so these are useful for creating a system of camera moves to insert...all the edits there still unrendered.
What is removed is the data about the camera moves, not the edit. Then the number of camera moves is divided by the running time of the sequence.
Stills do not count.
Unless they then begin to move later within the sequence.
So, we get results that classify the whole range of combos...
a one-shot moving image that is two minutes long,
a two-shot that is one minute,
a three-shot which is forty seconds,
a four-shot which is thirty seconds per shot, etc.
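As a very rough sketch of that classification, assuming the camera-move data has already been pulled out of the edit (the shot durations and the per-second density here are illustrative numbers, not from any real corpus):

```python
# Rough sketch: classify a sequence by how many camera moves it packs
# into its running time. Durations are in seconds; stills count only
# if they begin to move later in the sequence.

def classify_sequence(shots):
    """shots: list of dicts like {"duration": 120.0, "moves": True}."""
    moving = [s for s in shots if s["moves"]]        # stills do not count
    total_time = sum(s["duration"] for s in shots)
    if not moving or total_time == 0:
        return {"shots": 0, "density": 0.0}
    return {
        "shots": len(moving),
        "total_time": total_time,
        "density": len(moving) / total_time,         # camera moves per second
        "avg_shot_length": total_time / len(moving),
    }

# e.g. a one-shot, two-minute moving image vs. a four-shot sequence
print(classify_sequence([{"duration": 120.0, "moves": True}]))
print(classify_sequence([{"duration": 30.0, "moves": True}] * 4))
```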

This is then deployed as a fractal wave into a sequence of content.

Now, this is somewhat like the fragmented music videos of the early eighties. A lot of trite and not-all-that-well-told stories.
However, the point is to find an optimal place after mixing with spaces that are pre-programmed with camera swarms. The bees inside the hive fly into different combs in different configurations, turn around and look out. Then they fly out again; some leave, or more come in, and back in they go, to turn around...
Meanwhile a swarm moves from inside to outside...the swarm being at least 3 but going up to 8, there always being the 1 face frame.
This is all viewing the 3-d content in total. It moves around according to a spatially dominant point, but skips out to the other perspective now and then...sometimes a controlled movement in interactivity, though it can also still function automatically to link content. After a brief spell, given the previous sequence, there is an increase or decrease in AI, wherein a decrease is an increase in options, while an increase is an eventual disappearance of options, as the screen show has become compelling to watch.
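One loose way to sketch that automatic linking rule, with the thresholds and the idea of a fixed "option count" entirely my own assumptions:

```python
# Loose sketch: after each sequence the "AI level" rises or falls; a
# decrease opens up more interactive options, an increase eventually makes
# the options disappear because the screen show carries itself.

def update_options(ai_level, delta, max_options=8):
    """ai_level in [0, 1]; delta is the change after the previous sequence."""
    ai_level = min(1.0, max(0.0, ai_level + delta))
    # Fewer options as the AI takes over; more options as it steps back.
    options = round(max_options * (1.0 - ai_level))
    return ai_level, options

level, opts = 0.5, None
for delta in (+0.2, +0.2, -0.3):        # the show takes over, then relaxes
    level, opts = update_options(level, delta)
    print(f"AI level {level:.1f} -> {opts} interactive options")
```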
There is an earlier example of this, which forms a database of location footage. All of the location footage held by Film Commissions in every state should be archived; this archive is then like the coverage of New York during the shooting of the 1933 King Kong.
From the book The Making of King Kong by Goldner and Turner:
"Schoedsack and his crew filmed establishing shots in the harbor of New York City. Curtiss F8C-5/O2C-1 Helldiver war planes taking off and in flight were filmed at a U.S. Naval airfield on Long Island. Views of New York City were filmed from the Empire State Building for backgrounds in the final scenes and architectural plans for the mooring mast were secured from the building's owners for a mock-up to be constructed on the Hollywood soundstage."
What was formerly footage now becomes early sketches, the first program just cycling through all the footage on a dateline, which creates a 3-d swarm of virtual cameras.
They also organize into a flying pattern...all can then be reprogrammed in the 3-d arena, as the placement of the camera is due to a pre-existing swarm configuration.
The standard configuration of 9, with 1 constantly framing the face for emotion, is swarmed further into the 1st potential set. That 1st potential set is two lobes that center on the face frame, then broaden out into two opposing funnel-shaped lobes that expand from an axis, which then forms a network: a 27-camera lobe with a flexible axis (or center of focus).
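A minimal sketch of that lobe geometry, assuming the axis runs along z and the spacing is arbitrary; only the counts (the constant face frame plus two opposing lobes, growing from 9 toward 27) come from the description above:

```python
import math

# Sketch of the lobe idea: one camera always framing the face, the rest in
# two opposing funnel-shaped lobes that widen as they move out along an
# axis (taken as z here for simplicity). Spacing is illustrative.

def lobe_positions(n_cameras, spread=0.5):
    cams = [(0.0, 0.0, 0.0)]                     # the constant face frame
    per_lobe = (n_cameras - 1) // 2
    for sign in (+1, -1):                        # two opposing lobes
        for i in range(per_lobe):
            d = (i + 1) / per_lobe               # farther out = wider funnel
            angle = 2 * math.pi * i / per_lobe
            cams.append((spread * d * math.cos(angle),
                         spread * d * math.sin(angle),
                         sign * d))
    return cams

print(len(lobe_positions(9)), len(lobe_positions(27)))  # 9 -> 27 camera lobe
```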

But with the swarm not totally occupied, there are subswarms where the pattern is in closer, yet the placement is not east-west or left-right on a more or less horizontal axis, but instead an oddly distributed field, with slots for varying numbers of cameras in more of a bubble.
This bubble can be, on occasion, filled with 63 total frames, but then we are a bug eye looking into something, some 3-d model, and that's not practical. However, it does produce an odd limit.
It's kind of like organizing improvisers to play characters online, simply in chat, as one character gets more screen time...due to a simulation of an audience's eye movements.
And then slowly reshuffle a bit. So the fractal tunes in...provided now the space of the content is pre-programmed...with a deployment of 6 cameras wide, 3 cameras close, 1 close-up camera on a third-of-a-circle arc pointed at the character's face, trailing aside, then two randomly scattered wide shots.
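Writing that deployment out as data, with made-up radii and a placeholder random scatter, might look something like this:

```python
import math, random

# The deployment named above as data: six wides, three close cameras, one
# close-up that trails along a third-of-a-circle arc aimed at the face,
# and two randomly scattered extra wides. Radii and seed are illustrative.

def deploy(face=(0.0, 0.0), seed=None):
    rng = random.Random(seed)
    cams = []
    cams += [{"type": "wide", "pos": (math.cos(a) * 10, math.sin(a) * 10)}
             for a in (math.radians(60 * i) for i in range(6))]
    cams += [{"type": "close", "pos": (math.cos(a) * 3, math.sin(a) * 3)}
             for a in (math.radians(120 * i) for i in range(3))]
    # Close-up trailing along a 120-degree arc, always aimed at the face.
    arc = [math.radians(t) for t in range(-60, 61, 30)]
    cams.append({"type": "close_up_arc",
                 "path": [(face[0] + math.cos(a) * 1.5,
                           face[1] + math.sin(a) * 1.5) for a in arc]})
    cams += [{"type": "scatter_wide",
              "pos": (rng.uniform(-15, 15), rng.uniform(-15, 15))}
             for _ in range(2)]
    return cams

print(len(deploy(seed=1)))   # 6 + 3 + 1 + 2 = 12 camera slots
```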

Camera programming design again.


Anyway, the cited books:




Gold Rush Port: The Maritime Archaeology of San Francisco's Waterfront by James Delgado

This is mentioned first. It might be the biggest hinge point, because it occurred to me just how much data archaeologists physically add to History. They provide a real set and setting of human proportion and interaction. History jumps to many more sets and settings within its narrative.
But they really add a lot to each other, even as comparative History, analogous developments, patterns, can also be applied in a sequence or format...for instance, the difference between people working on factory floors assembling cars vs. people making pots on potters' wheels vs. people 3-d printing...each one being a function of History.

Quantitative data that is pre-designed models. Press a button and boom, you have a technological level, but there is a very, very fine grain to the near future...as it is more a category than necessarily a ranging around a scale. But I digress to the entertainment venture that is to spring forth later.
Web design...err..uh:

One puts on a cardboard mask, the Google one or some other folded style. The screen activates and the viewer can see a target which overlays the camera screen. Then a voice says, along with a blinking, pointy line, "icon this way", which does move...it's an icon to which the model on the ground must be affixed. There are frequently plinths and so on where the tablets can get locked onto a single point. Or they can be programmed. So the person walks over, perhaps stands in an impromptu cluster or is lined up by the teacher or docent, and they lock onto the plinth model, then step back to where they want to be and wait.
So at first the voice says, "To aim from here, step toward the Hindenburg model and match the screen with the model."
In one of those incredibly intimate yet dispassionate voices, like from Logan's Run.
Then it asks if this is where they want to be. There is XX:XX amount of time to move.
After the last tablet signs in there might be a bit of intro, with nothing really on the screen...a voice explaining...then it says, "Well then, okay, prepare: put the Video Hat in front of your eyes and follow the arrow to see the Hindenburg Disaster in Virtual America."
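The setup flow above, reduced to a plain state machine; the state and event names are my paraphrase, and the prompts are only approximations of the wording in the text:

```python
# Minimal sketch of the docent/tablet setup flow as a state machine.
STATES = ["WAIT_FOR_SIGNIN", "INTRO", "LOCK_ON_MODEL", "CHOOSE_SPOT", "PLAY"]

PROMPTS = {
    "INTRO": "Voice-over intro, nothing on screen yet.",
    "LOCK_ON_MODEL": "To aim from here, step toward the Hindenburg model "
                     "and match the screen with the model.",
    "CHOOSE_SPOT": "Is this where you want to be? XX:XX to move.",
    "PLAY": "Put the Video Hat in front of your eyes and follow the arrow.",
}

def advance(state, event):
    """Step through the flow one event at a time."""
    order = {
        ("WAIT_FOR_SIGNIN", "all_tablets_signed_in"): "INTRO",
        ("INTRO", "intro_done"): "LOCK_ON_MODEL",
        ("LOCK_ON_MODEL", "plinth_locked"): "CHOOSE_SPOT",
        ("CHOOSE_SPOT", "timer_expired"): "PLAY",
    }
    return order.get((state, event), state)

state = "WAIT_FOR_SIGNIN"
for ev in ("all_tablets_signed_in", "intro_done", "plinth_locked", "timer_expired"):
    state = advance(state, ev)
    print(state, "-", PROMPTS.get(state, ""))
```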

After this the image slowly moves into view as the sound of the engine increases in audibility...some kids turn their heads, looking at whatever is occurring in the Virtual space, perhaps the crowd of reporters is right behind them, but they turn back, and an icon appears. It's in the far right corner, where one can fit a single finger. This is not a handedness exercise. One simply presses a button, and there is enough space in the side of the cardboard that two fingers can move around...or a thumb can be placed over the screen while the grasp of the hand enables some leverage.
So, there is an icon-driven reading one can access. And if that is accessed, then a larger category pops up or enters stage right, stays far stage right, then exits stage right.

This interface can be more polite as well, as the user is to be still most of the time while watching the beginning: say, an announcement, the screen goes dark, all settings now set, and there appears on the horizon, buzzing with hard-driving engines, a tiny image...though it is not an image imposed over a camera view...it is instead a whole world view, which is the Air Force Base.

The hum moves realistically from ear to ear, based on where it is in the sky as it approaches, and therein the men on the ground can be seen attending to utilities, the reporters and those waiting for arrivals clustering around; their speeches cease.
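A miniature sketch of that ear-to-ear movement: an equal-power stereo pan driven by the airship's azimuth relative to the listener. A real headset build would presumably use proper HRTF spatialization; this only shows the idea:

```python
import math

# Equal-power stereo pan from the airship's azimuth relative to the viewer.
def pan_from_azimuth(azimuth_deg):
    """azimuth_deg: 0 = straight ahead, -90 = hard left, +90 = hard right."""
    a = max(-90.0, min(90.0, azimuth_deg))
    theta = (a + 90.0) / 180.0 * (math.pi / 2)    # 0 (left) .. pi/2 (right)
    return math.cos(theta), math.sin(theta)        # (left gain, right gain)

for az in (-90, -30, 0, 30, 90):                   # the hum crossing the sky
    left, right = pan_from_azimuth(az)
    print(f"{az:+4d} deg  L={left:.2f}  R={right:.2f}")
```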
At this point, or perhaps somewhere before or a little bit after, depending on the actual staging of the leader animation as it pre-stages the actual event...
Anyway, a voice talent begins reading...and it's the default, which is written in this initial historian narrative to refer to and pre-stage the transition to the famous radio announcer.
Soon after, like two or three seconds after this initial voice-over narration of Historian so-and-so, an icon appears and the viewer can switch to two other voices and histories...they, of course, then hear the voices switching around, and once this happens once, another icon appears that reads "switch back to first voice", or you can touch the other icon, which says "advance to the next voice".
If they fiddle with this, an icon will appear to again switch the Historian...probably an icon of this person...whatever they choose as their icon. Could be their image, but not always.

The action continues regardless of the choices; the viewers are moving inexorably toward a moment. Unless they press pause, which is a button on the tablet and not part of the overlay on the screen.

The ground crew scrambles...I think it's about here the announcer begins...and goes on for a while, but we also will have an icon which appears here, the first in the series of ways to alter the data.
The viewer can choose the narrator, who has had a single reader, or switch around three narratives with one narrator.
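A small sketch of that switching logic, with placeholder historian names: the timeline runs on regardless, and the viewer only changes which voice, or which of the three narratives, they are hearing:

```python
# Sketch of the voice/narrative switching. The action timeline is untouched;
# only the audio track selection changes. Names are placeholders.
class NarrationState:
    def __init__(self, voices, narratives):
        self.voices = voices              # e.g. three historians
        self.narratives = narratives      # or three texts, one reader
        self.voice = 0
        self.narrative = 0
        self.switched_once = False        # gates the "switch back" icon

    def next_voice(self):
        self.voice = (self.voice + 1) % len(self.voices)
        self.switched_once = True

    def next_narrative(self):
        self.narrative = (self.narrative + 1) % len(self.narratives)
        self.switched_once = True

    def icons(self):
        base = ["advance to the next voice"]
        return base + ["switch back to first voice"] if self.switched_once else base

s = NarrationState(["Historian A", "Historian B", "Historian C"],
                   ["narrative 1", "narrative 2", "narrative 3"])
s.next_voice()
print(s.voices[s.voice], s.icons())
```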
