X3D V4.0 Open Workshop / Meeting June 8th 2016
Topics
Note: The red question marks at the end of each question are place-holders for any short answers to emerge from the discussions.
- What level of X3D integration into HTML5 do we want? ???
- Do we want to be fully integrated like SVG? ???
- Do we want/need a DOM spec? If so: ???
- Which DOM version should it be based on? ???
- Do we want to fully support all DOM/HTML features? ???
- Do we want to maximize the backwards compatibility of V4.0 with V3.3? Or break away completely? ???
- Do we want to retain SAI? ???
- What features do we want? For example, ???
- How is animation to be handled? The X3D way of TimeSensor and ROUTEs, or an HTML way, such as CSS3 animations, or else JavaScript? ???
- How is user interaction to be handled? The X3D way of Sensors, or the HTML way with event handlers? ???
- Do we need any different nodes? One example might be a mesh node. ???
- Do we want Scripts and Prototypes in HTML5? ???
- How do we want to handle styling? ???
- What profile(s) do we need for HTML? ???
Attendees and contributors
E-mail contributors: Don Brutzman, Leonard Daly, Andreas Plesch, Philipp Slusallek
Meeting Attendees:
Apologies: Don Brutzman, Andreas Plesch
Prior e-mail contributions:
Note: Contributions are presented in chronological order.
Contribution 1
I think the bigger question is what should be done with X3D: is X3D solely going to exist within HTML, or will X3D have a separate life inside and outside of HTML? If that life is solely within HTML, then the questions below become inclusive of all X3D. If there are separate existences, then the first question is what the cross-compatibility is between X3D/HTML and X3D/other.
Contribution 2
Relevant working-group references follow. A lot of excellent work has been accomplished already.

- X3D Version 4: http://www.web3d.org/x3d4
- Web3D Consortium Standards Strategy: http://www.web3d.org/strategy
- X3D Graphics Standards: Specification Relationships: http://www.web3d.org/specifications/X3dSpecificationRelationships.png
- X3D Version 4.0 Development: http://www.web3d.org/wiki/index.php/X3D_version_4.0_Development

A 5-10 minute quicklook discussion across these resources might help. We are pretty far up X3D4 Mountain already! The posted discussion-topics list is a good start for renewed activity, and an important way to keep track of everyone's many valuable ideas. Suggestion: create some kind of topics-discussion page, probably easily linked off the preceding wiki page.

My general inputs for each of these topics are guiding questions:
a. What do the HTML5/DOM/CSS/SVG/MathML specifications actually say?
b. How is cross-language HTML page integration actually accomplished, as shown in best practices by key exemplars?
c. What is the minimal addition needed to achieve a given technical goal using current X3D capabilities?

Editorial observation: the word "want" appears 9 times in this list... Understandable from common usage, but not a very good way to achieve consensus over a long-term effort. Also not very useful for measuring successful resolution. Pragmatic engineering rephrase: "what problem are you trying to fix?"

Over 20 years of successful working-group + community efforts can guide us in these endeavors - we know how to succeed together. An effective path for building consensus is to:
- define goals that are illustrated by use cases,
- derive technical requirements,
- perform gap analysis, and then
- execute loosely coordinated task accomplishment according to each participant's priorities.

How to execute each specification addition: write prose, create examples, implement, evaluate. Repeat until done, topic by topic.
Contribution 3
The discussion on introducing an id field seemed to point towards the need for fuller integration, in the sense that it is difficult to isolate features. It may be necessary to define an X3D DOM similar to the SVG DOM, with the corresponding interfaces. SVG is very successful on the web, but it took a long time to arrive there.

x3dom has a dual-graph approach: there is the X3D graph and, in parallel, the page DOM graph, which are kept in sync but are both fully populated. Johannes Behr would know better how to explain the concept. It looks like FHG decided that x3dom is now considered community (only?) supported. This probably means it will fall out of sync as newer web browsers arrive or WebGL is updated.

I explored A-Frame a bit more. It will be popular for VR. It is still in flux and evolves rapidly. The developers (Mozilla) focus on its basic architecture (which is non-hierarchical, a composable component system) and expect users to use JavaScript to develop more advanced functionality (in the form of shareable components). So it is quite different, fun for developers, and for basic scenes easy for consumers. Since most mobile VR content at this point is basic (mostly video spheres and panos), it is a good solution for many. (As a test I also implemented IndexedFaceSet as an A-Frame component, and it was pretty easy - after learning some Three.js. So it would be possible to have X3D geometry nodes on top of A-Frame. Protos, events and routes are another matter, but also may not be impossible.) There is still space for X3D as a more permanent, and optionally sophisticated, 3D content format on the web.

Event system: My limited understanding is that on a web page, the browser emits events when certain things happen. Custom events can also be emitted by JavaScript code (via DOM functions) for any purpose. (All?) events have a time stamp and can have data attached. Events can then be listened to; there is no restriction on listening, e.g. all existing events are available to any listener. A listener then invokes a handler which does something related to the event. JavaScript code can consume, cancel, or relay events as needed (via DOM functions). It is not unusual for many events to be managed on a web page. Events can be used to guarantee that there is a sequence of processing. So how does the X3D event system relate? There is a cascade, and directivity. How long does an event live? One frame? Until it has fully cascaded through the scene graph?

Since x3dom and Cobweb are currently the only options, from a practical standpoint a question to ask may be this: what is needed to make x3dom and Cobweb easy to use and interact with on a web page? Typically, the web page would provide a UI and the connection to databases or other sources of data, and the X3D scene is responsible for rendering and interacting with the 3D content. For VR, the UI would need to be in the scene, but connections and data sources would still be handled by the web page.

Cobweb in effect allows use of the defined SAI functions. Is it possible to define a wrapper around these functions to allow a DOM-like API (createElement, element.setAttribute .. element = null)? It may be, since they are similar anyway, and it would go a long way. But it still would not be sufficient to let other JavaScript libraries such as D3.js or React control and modify a scene, since they would expect X3D nodes to be real DOM elements.

VR: A current issue is control devices. It would probably be useful to go over the spec and see where there is an implicit assumption that mouse or keyboard input is available. VR HMDs have different controls (head position and orientation (pose), one button), and hand-held controllers (gamepads, special sticks with their own position/orientation) or the tracked hands themselves are becoming more popular. In VR, you do want to use your hands in some way. Perhaps it makes sense to have <RightHand/> / <LeftHand/> nodes paralleling <Viewpoint/>, with position/orientation fields which can be routed to Transforms to manipulate objects? How a browser would feed the <Hand> nodes would be up to the browser. InstantReality has a generic IOSensor.
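As a small illustration of the DOM event mechanics described above, here is a minimal sketch of a custom, time-stamped event with data attached, dispatched and listened to on an element. The event name and the detail contents are invented for illustration; only the standard DOM APIs (addEventListener, dispatchEvent, CustomEvent) are assumed.

// A hypothetical 'x3d-fieldchange' event carrying a field name and value.
const target = document.querySelector('#scene');
target.addEventListener('x3d-fieldchange', function (ev) {
  // ev.timeStamp is supplied by the browser; ev.detail carries arbitrary data.
  console.log(ev.timeStamp, ev.detail.field, ev.detail.value);
});
target.dispatchEvent(new CustomEvent('x3d-fieldchange', {
  detail: { field: 'translation', value: '1 0 0' },
  bubbles: true
}));

And to make the wrapper question more concrete, here is a minimal sketch of what a DOM-like facade over an SAI-style scene object could look like. Every name below (saiScene, createNode, getField, setValue, appendValue, parseFieldString) is a hypothetical placeholder, not the actual Cobweb API.

// Minimal sketch of a DOM-like facade over an SAI-style scene object.
function makeDomLikeWrapper(saiScene) {
  return {
    createElement(nodeTypeName) {
      // SAI-style: create a node by its X3D type name.
      const node = saiScene.createNode(nodeTypeName);
      return {
        _node: node,
        setAttribute(fieldName, stringValue) {
          // DOM attributes are strings, SAI fields are typed, so a wrapper
          // has to parse the string into the field's declared type.
          node.getField(fieldName).setValue(parseFieldString(fieldName, stringValue));
        },
        appendChild(childWrapper) {
          node.getField('children').appendValue(childWrapper._node);
        }
      };
    }
  };
}

// Left abstract: would dispatch on the field's type (SFVec3f, MFInt32, ...)
// and convert the attribute string accordingly.
function parseFieldString(fieldName, s) { return s; }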
Contribution 4
I am not sure that I will be able to join the meeting, so let me present some of our ideas in this context by email already. As you might know, we have done a lot of work on declarative 3D -- in the alternate universe called XML3D. The reason for this unfortunate split lies in the fact that it has been rather difficult to bridge the gap between our idea of minimal, generic extensions to HTML5 that enable declarative 3D while staying as close as possible to the current Web technology stack on the one side, and the need for backward compatibility that people strived for in X3D(OM) on the other.

Now, there may be a quite interesting way these two world views could be reconciled. The basic idea was already proposed and discussed between us and the X3DOM group: merging X3DOM and XML3D by identifying the (significant and large!) common core and layering X3D/XML3D-compatible interfaces on top of it (or eventually identifying a common set of Dec3D elements). Unfortunately, this idea was not really picked up by anyone back then. However, with newer technology like Web Components (a version of prototypes in HTML5) this now becomes a highly interesting and much more practical option. We are actually exploring this right now. First results look very promising, and a first paper on this has just been accepted for Web3D this year.

At the core of this approach we are using a clean interface to rendering engines, where we use Three.js as a default option. Other engines like game engines or ray tracing etc. would be alternatives. (We are also exploring server-based real-time ray tracing based on a generic real-time synchronization layer between scene descriptions, as one example.) On top of this is a slightly refined generic data handling layer that is derived from our Xflow. Xflow has been tremendously useful for us and makes data management significantly easier than with the specialized nodes in X3D. It is the perfect building block for Web Components. We are also integrating into this layer a programmable data processing engine that is derived from our flexible shade.js compiler for programmable shading.

Everything on top of this is essentially fair game for Web Components (similar to A-Frame, but with many more options). The more specialized and often domain-specific nodes from X3D would be prime examples for this. We have actually started to implement some of the basic X3D nodes already, plus the non-core XML3D nodes -- pretty much along the lines of what we discussed with X3DOM several years ago. Given the powerful underlying engine, it actually becomes rather straightforward to implement these nodes as Web Components, especially the X3DOM subset. But we are just starting. We are even exploring having a public repository of Web Components that people could develop independently and that get automatically loaded when referenced in a scene (subject to some security policy, of course). Talk about leveraging the power of the distributed web :-).

We are finalizing the paper for the final version right now but can make a preprint available as soon as this is ready. People would be more than welcome to help design and develop this further. Maybe this could also be an interesting basis for some of the work on X3D V4?
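As one concrete illustration of the layering described above, here is a minimal sketch of an X3D-style node wrapped as a Web Component. Only the Custom Elements API itself (customElements.define and the lifecycle callbacks) is standard; the tag name and the renderBackend / addTransform hooks are invented placeholders for whatever engine (e.g. Three.js) sits behind the component layer.

// Minimal sketch: an X3D-style Transform exposed as a custom element.
class X3DTransformElement extends HTMLElement {
  static get observedAttributes() { return ['translation', 'rotation', 'scale']; }

  connectedCallback() {
    // Register this element with the rendering engine when it enters the DOM.
    this._handle = window.renderBackend && window.renderBackend.addTransform(this);
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // Forward declarative attribute changes to the engine as numbers,
    // so the engine can keep typed buffers instead of re-parsing subtrees.
    if (this._handle) this._handle.update(name, newValue.split(/\s+/).map(Number));
  }

  disconnectedCallback() {
    if (this._handle) this._handle.dispose();
  }
}
customElements.define('x3d-transform', X3DTransformElement);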
Contribution 5
As repeated a few days ago, I remain keen for us to establish some kind of X3D v4 wiki page. We need to collaboratively show how these different categories of concepts can be defined along with their associated pros and cons. Email and meeting discussions can then grow those sections effectively, to show that use-case goals and design requirements can be defined and met.

Wondering if the x3dom dual-graph page DOM structures align with the draft work on Shadow DOM at W3C: https://www.w3.org/TR/shadow-dom

I'm hoping that John Carlson gets a chance to look at whether his JSON prototype expander might be adaptable as part of x3dom - that would be pretty valuable. Although Fraunhofer has outstanding prototype support in their Instant Reality engine, it has never been clear why that code can't simply be applied in x3dom as well. Certainly a consistent approach would seem to make both codebases more coherent and maintainable for them. If Fraunhofer refuses to adapt or release the Instant Reality prototype code then we will just have to do it ourselves... John's work seems like a big step in that direction.

Also thanks for pointing out important questions about Fraunhofer's stewardship of the X3DOM project. It will be good to learn more about their intentions so that our community can align effectively. There has been no handoff.

Regarding the X3D event system: a changing value of any field in any node can be sent as an input to any field in any node, as long as types strictly match (apples to apples). Rephrased: a ROUTE passes a time-stamped value from one node to another. Internal to an X3D scene graph, that has been implemented dozens of times. Seems extremely simple. External to an X3D scene graph, meaning via a browser using the Scene Access Interface (SAI), mechanisms are similarly well defined. Since the DOM is string based, and since any X3D event value can be expressed as a string, it seems like we have a straight connect-the-dots approach awaiting us.

Forgive me for using a four-letter word, but if interested individuals might actually _read_ the HTML5/DOM and X3D specifications, then the answers to most implementation & alignment questions are likely spelled out for us.
Reference:
- Shadow DOM: https://www.w3.org/TR/shadow-dom
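As a rough illustration of that connect-the-dots idea, the sketch below realizes a ROUTE as a string-valued attribute copy between two DOM elements, using only the standard MutationObserver API. The element ids and field names are made up for illustration; this is not how x3dom or Cobweb actually implement routes today.

// A ROUTE expressed as an attribute-to-attribute copy between DOM elements.
function routeAttribute(fromEl, fromField, toEl, toField) {
  const observer = new MutationObserver(function (mutations) {
    for (const m of mutations) {
      if (m.attributeName === fromField) {
        // The value travels as a string, exactly as the DOM requires.
        toEl.setAttribute(toField, fromEl.getAttribute(fromField));
      }
    }
  });
  observer.observe(fromEl, { attributes: true, attributeFilter: [fromField] });
  return observer;
}

// Hypothetical usage: mirror one transform's translation onto another.
// routeAttribute(document.getElementById('boxA'), 'translation',
//                document.getElementById('boxB'), 'translation');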
Contribution 6
Please do not go the route (sic!) of a string-based interface for implementing X3D routes. Yes, the DOM has a generic string-based interface, which is really important in general - but not for efficiently handling big 3D data. Any DOM node can additionally provide a "binary" JS API as well, ideally using Typed Arrays in JS.

Converting to strings and back will cause huge overhead and will rule out any GPU-based computation and acceleration. The latter is a must in today's environments, especially on mobiles. You do not want to create this overhead for large arrays of vertices or the like and have to parse all the numbers again and again. It can also cause numerical inaccuracies in the conversion that may lead to inconsistencies in the binary representation, which can cause gaps in supposedly closed geometry.

BTW, this is exactly why we have created Xflow: to be able to efficiently specify generic typed data arrays (available as GPU buffers in the engine as early as possible), flexibly composite individual buffers into sets of buffers (<data> elements that define all the input data for efficient draw calls), and also to process the data as necessary along the way (e.g. flexible animation, image processing, procedural shading, transitions, etc.).

Xflow is actually much more powerful than routes and it fits much better with HTML5 -- in my opinion at least. Funded by Intel, we are just now extending Xflow to automatically make use of e.g. SIMD instructions (via SIMD.js) and other JS acceleration techniques. We are also looking at WebAssembly here for better performance even when not going to the GPU.
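A minimal sketch of the "binary" JS API argued for above: alongside its string attribute, a node hands out its vertex data as a Typed Array that can go straight into a GPU buffer without re-parsing. The points property on the element is hypothetical; only Typed Arrays and the WebGL calls themselves are standard.

// Assumed: a <Coordinate> element is present in the page DOM.
const coordElement = document.querySelector('Coordinate');

// String path (what a purely attribute-based interface forces): re-parse
// thousands of numbers on every change, with possible precision loss.
const parsed = new Float32Array(
  coordElement.getAttribute('point').trim().split(/\s+/).map(Number));

// Binary path: the element keeps (or lazily builds) a Float32Array and
// hands out a reference, so there is no string round trip.
const points = coordElement.points || parsed;

// Either way, a typed array can be uploaded directly to the GPU:
const gl = document.querySelector('canvas').getContext('webgl');
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);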