PipelineML Tour
XML as a Foundation
There are numerous loosely defined data interchange standards based on Microsoft Access, Excel, shapefiles, or even CSV files. However, these solutions cannot clearly define complex structures and rules while also providing flexibility, ease of use, openness, and extensibility. These and other requirements point to one platform that is well suited to the purpose: XML (eXtensible Markup Language), the choice of most well-architected data interchange standards. XML provides a powerful suite of technologies that collectively enable us to describe pipeline assets, and the activities performed on them, in a clear, flexible way. Because XML shares its syntax and heritage with HTML, the language of the World Wide Web (both descend from SGML), there is a large global pool of people who understand the fundamentals of XML. XML is also human readable: a person can open an XML file and understand the information being communicated.
One of the most powerful tools in the XML arsenal is XML Schema (also known as XSD). An XML Schema is a file, itself written in XML, that defines the structure and rules a package of data must follow to conform to an interchange standard. Essentially, PipelineML is a set of XML Schemas that define the structures and rules for describing pipeline assets and activities. For those familiar with relational databases and SQL, an XML Schema defines a data structure in much the same way a relational data model defines tables, columns, and data types. The key difference is that XML Schema provides far more powerful and flexible capabilities than a relational data model (including object-oriented inheritance, extensibility, substitution groups, extensions/restrictions, inclusiveness/exclusiveness, collections, enumerations, dictionaries, controlled vocabularies, metadata, external links, etc.). The end user of PipelineML does not need to understand these concepts and technologies. Rather, the onus is on software vendors to build functionality into their software to generate PipelineML (export) or consume PipelineML (import) asset data based on these XML Schemas. The end user simply selects export to output a PipelineML file, or import to bring one into their software application.
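To make the analogy concrete, here is a minimal sketch of what a fragment of such a schema might look like. The element and property names (LinePipe, outsideDiameter, and so on) are illustrative assumptions for this tour, not the standard's actual definitions:

    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <!-- Hypothetical line pipe component: its structure and data types -->
      <xs:element name="LinePipe">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="outsideDiameter" type="xs:decimal"/>
            <xs:element name="wallThickness" type="xs:decimal"/>
            <xs:element name="grade" type="xs:string"/>
          </xs:sequence>
          <xs:attribute name="id" type="xs:ID" use="required"/>
        </xs:complexType>
      </xs:element>
    </xs:schema>

A schema like this allows validating software to automatically reject a file that is missing required elements or carries the wrong data types, which is how the structures and rules of the standard are enforced.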
Once the standard is released and approved, we anticipate that many software vendors will build PipelineML import and export capabilities that follow the structures and rules of these XML Schemas. The end result will be one or more PipelineML files that can be shared between parties, such as different departments within an operator. For example, a pipeline system designer may use AutoCAD to design all the components needed to create a new pressurized pipeline system. Once the project is complete and approved, the designer exports all the detailed component information out of AutoCAD into a PipelineML file (assuming AutoCAD chooses to support the PipelineML data interchange standard). The design department then forwards that PipelineML file to the person responsible for parts acquisition in preparation for construction.
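As a hypothetical illustration of what such an exported file might contain (again using made-up element names rather than the standard's actual vocabulary):

    <PipelineML>
      <LinePipe id="LP-0001">
        <outsideDiameter>323.9</outsideDiameter> <!-- millimetres -->
        <wallThickness>9.5</wallThickness>       <!-- millimetres -->
        <grade>API 5L X52</grade>
      </LinePipe>
    </PipelineML>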
In the next phase of the lifecycle, the parts acquisition group can import the supplied PipelineML file into their data management application (provided its vendor supports PipelineML). Once imported, they have access to all the detailed information specified by the pipeline designer. This same process can continue through the entire lifecycle of the pipeline system, including surveying, operations, integrity, regulatory reporting, divestiture, and so on. Each time someone interacts with this set of data, they can add more data to it. For example, the person responsible for parts acquisition may add manufacturer records, such as material test reports (MTRs), documenting the tests performed on each of those pipeline components. This information accrues over time. By the time someone in operations receives the latest version of the PipelineML file and imports it into their operational system, it carries all of the historical knowledge collected about those assets. This is the essence of traceable, verifiable, and complete (TVC) records.
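Continuing the hypothetical fragment above, the parts acquisition group might enrich the same component record with manufacturer test data, so the file carries its history forward to the next recipient:

    <LinePipe id="LP-0001">
      <outsideDiameter>323.9</outsideDiameter>
      <wallThickness>9.5</wallThickness>
      <grade>API 5L X52</grade>
      <!-- Added after procurement: manufacturer test record (illustrative) -->
      <MaterialTestReport reportNumber="MTR-88213" testDate="2017-03-14">
        <yieldStrength unit="MPa">395</yieldStrength>
      </MaterialTestReport>
    </LinePipe>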
The scenario described above represents a mature state of the PipelineML data interchange standard; the initial release will not support all of this functionality. In fact, we have deliberately adopted a standards development methodology that starts small and grows as the standard matures. It will likely take years to achieve the scenario described above. Had we tried to support every possible industry use case in the first release, the initiative would have bogged down under its own weight. Instead, we have chosen the proven strategy of beginning with a narrow scope covering only the most critical use cases and growing it through short, iterative release cycles. Once the first version of the standard is released, we anticipate this milestone drawing attention throughout the oil and gas pipeline industry, attracting additional stakeholders who can contribute subject matter expertise in adjacent areas of interest and thereby broaden and mature the standard over time. Throughout the development process, as stakeholders identify use cases outside the current scope, we document them for potential inclusion in a future release. This iterative approach allows us to learn, adapt, and build consensus over time.