PipelineML Tour

Schema Modularity

One of the design philosophies of GML that closely aligns with our approach is schema modularity. A single PipelineML schema containing all the data structures and rules needed for every possible pipeline data interchange use case would be a massive file with many hundreds of structures and rules. Validating all of those rules against a large dataset of pipeline assets would take considerable time every time someone exported data to a PipelineML file. A better approach is to divide those structures and rules into separate, logically modularized schemas.

Figure 1 – Package dependencies of the modularized schemas (UML diagram)

Figure 1 is a Unified Modeling Language (UML) diagram that illustrates this concept. It shows the ten schemas we have defined so far, along with their dependencies (the set will likely grow in the coming years). At the center is the Core schema, which contains the basic data structures and rules needed to define pipelines and the components from which they are built. It focuses on physical objects and their intrinsic properties, meaning those that do not change over time. For example, manufacturers design components to a set of specifications such as material type, nominal pipe size, outside diameter, and wall thickness. These are unchanging intrinsic properties and thus are described in the Core schema. Specific components may also receive additional attributes after the manufacturing process, such as a test pressure rating; since these too are intrinsic, they are likewise described in the Core schema.
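As a rough illustration of the kind of intrinsic properties the Core schema covers, the sketch below shows a hypothetical XML Schema fragment for a pipe segment. The namespace, element names, and types are illustrative assumptions for this tour, not taken from the actual PipelineML Core schema.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of a Core-style component type. The namespace
     and all names are illustrative, not the actual PipelineML Core. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:core="http://example.org/pipelineml/core"
           targetNamespace="http://example.org/pipelineml/core"
           elementFormDefault="qualified">

  <!-- Intrinsic properties fixed at design/manufacture time;
       they do not change over the life of the component. -->
  <xs:complexType name="PipeSegmentType">
    <xs:sequence>
      <xs:element name="materialType" type="xs:string"/>
      <xs:element name="nominalPipeSize" type="xs:string"/>
      <xs:element name="outsideDiameter" type="xs:decimal"/>
      <xs:element name="wallThickness" type="xs:decimal"/>
      <!-- Assigned after manufacture, but still intrinsic -->
      <xs:element name="testPressureRating" type="xs:decimal" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>

  <xs:element name="PipeSegment" type="core:PipeSegmentType"/>
</xs:schema>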

Transient component properties that change over time will be defined in other schemas to be developed in the future. These will include attributes discovered through observations and measurements, such as operating status, the product(s) being transported, product flow direction, operating pressure, regulatory classifications, corrosion levels and locations, and whether a valve is open or closed. By drawing a clear, unambiguous line around what information each schema stores, it becomes easier to understand which schemas are needed for any specific use case.
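For contrast, here is a hypothetical sketch of how one of those future schemas might model transient properties as time-stamped observations. Since these schemas are not yet defined, the namespace and every name below are assumptions for illustration only.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of a future schema for transient, observed
     properties; the namespace and names are illustrative only. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/pipelineml/operations"
           elementFormDefault="qualified">

  <!-- A time-stamped observation of properties that change over time -->
  <xs:complexType name="OperatingObservationType">
    <xs:sequence>
      <xs:element name="observedAt" type="xs:dateTime"/>
      <xs:element name="operatingStatus" type="xs:string"/>
      <xs:element name="productTransported" type="xs:string"
                  maxOccurs="unbounded"/>
      <xs:element name="flowDirection" type="xs:string" minOccurs="0"/>
      <xs:element name="operatingPressure" type="xs:decimal" minOccurs="0"/>
      <xs:element name="valveState" minOccurs="0">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="open"/>
            <xs:enumeration value="closed"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:element>
    </xs:sequence>
  </xs:complexType>
</xs:schema>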

This approach to modularizing schemas allows a vendor who develops, for example, pipeline construction software to use the Core schema alone to export pipeline construction data into PipelineML. Each of the other nine schemas depends on the Core schema: it imports the Core and then extends or adds to it. A software vendor that builds cathodic protection data management systems needs not only the Core data structures and rules but also those found in the CathodicProtection schema. Because that schema automatically imports the Core schema, it combines Core and CathodicProtection into a common set of data structures and rules. This approach simplifies the solution so that any software vendor needs to reference and support only the schemas applicable to the capabilities of its software.
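The import-and-extend mechanism can be sketched in XML Schema terms as follows. This is a hypothetical fragment, reusing the illustrative Core namespace and PipeSegmentType from the earlier sketch; the file name core.xsd and all cathodic-protection names are likewise assumptions, not taken from the actual standard.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch of how a module schema could import and extend
     the Core schema; namespaces, file names, and type names are
     illustrative only. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:core="http://example.org/pipelineml/core"
           targetNamespace="http://example.org/pipelineml/cathodicProtection"
           elementFormDefault="qualified">

  <!-- Pull in the Core data structures and rules -->
  <xs:import namespace="http://example.org/pipelineml/core"
             schemaLocation="core.xsd"/>

  <!-- Extend a Core component type with cathodic-protection attributes -->
  <xs:complexType name="ProtectedPipeSegmentType">
    <xs:complexContent>
      <xs:extension base="core:PipeSegmentType">
        <xs:sequence>
          <xs:element name="anodeType" type="xs:string" minOccurs="0"/>
          <xs:element name="pipeToSoilPotential" type="xs:decimal"
                      minOccurs="0"/>
        </xs:sequence>
      </xs:extension>
    </xs:complexContent>
  </xs:complexType>
</xs:schema>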

Our goal with the first release of PipelineML is to define only the data structures and rules needed for the Core schema. The remaining schemas are colored dark grey in Figure 1 to indicate that they have been identified for future development but are not included in the initial release. Once the initial version of PipelineML is released and approved by the OGC, we will collectively prioritize the next set of use cases to include in a subsequent release of the standard. This approach lets us gain momentum quickly and then gradually broaden the scope of PipelineML until it covers all the data interchange use cases needed by the international oil and gas pipeline industry. The end result will be a set of PipelineML schemas designed to work together, without redundant or conflicting structures and rules, as a cohesive whole. Any of these individual schemas can be updated in future releases of the standard as stakeholders make new discoveries.

This concludes the interactive tour of PipelineML. The recommended next step is to go to the Documents section and download a copy of Introduction to PipelineML. Its first section contains the same information as this tour and then continues into the more technical aspects of the data standard development effort.
