

MuPIF

Reliable multiscale/multiphysics numerical modeling requires including all relevant physical phenomena along the process chain, typically involving multiple scales and the combination of knowledge from multiple fields. A pragmatic approach lies in combining existing tools to build customized multiphysics simulation chains. To achieve such a modular approach, a multiphysics integration framework called MuPIF has been designed, which provides an underlying infrastructure enabling high-level data exchange and steering of individual applications.

Description

Framework Design

One of the main objectives is to develop an integrated modelling platform targeted at multiscale and multi-physics engineering problems. The approach followed in MuPIF is based on a system of distributed, interacting objects designed to solve a given problem. The individual objects represent entities in the problem domain, including individual simulation packages, but also the data, such as fields and properties. Abstract classes are introduced for all entities in the model space. They define a common interface that needs to be implemented by any derived class representing a particular implementation of a specific component. Such an interface concept allows any derived class to be used at a very abstract level, through the common services defined by the abstract class, without concern for the implementation details of the individual software component. This essentially allows all simulation tools to be manipulated using the same interface. Moreover, as the simulation data are represented by objects as well, the platform is independent of particular data format(s), since the exchanged data (such as fields and properties) can be manipulated using the same abstract interface. The focus is therefore on the services provided by the objects (object interfaces) and not on the underlying data itself.
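The abstract-interface idea can be sketched in plain Python. The class and method names below (Application, solve_step, get_field) are illustrative assumptions, not the actual MuPIF API; the point is that the steering code sees only the common interface.

```python
from abc import ABC, abstractmethod

class Application(ABC):
    """Common interface that every wrapped simulation code implements."""
    @abstractmethod
    def solve_step(self, tstep):
        """Advance the application to the given time step."""

    @abstractmethod
    def get_field(self, field_id):
        """Return a data object (e.g. a field) produced by the application."""

class ThermalSolver(Application):
    def solve_step(self, tstep):
        self.result = 20.0 + 0.5 * tstep     # placeholder computation
    def get_field(self, field_id):
        return self.result

class MechanicalSolver(Application):
    def solve_step(self, tstep):
        self.result = 1.0e-3 * tstep         # placeholder computation
    def get_field(self, field_id):
        return self.result

# The steering script manipulates both tools through the same abstract interface:
for app in (ThermalSolver(), MechanicalSolver()):
    app.solve_step(1.0)
    print(app.get_field("FID_Any"))
```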

A complex simulation pipeline developed in MuPIF consists of a top-level script in the Python language (called a scenario) enriched by the newly introduced classes. Even though the platform can be used locally on a single computer, orchestrating installed applications, the real strength of MuPIF is its distributed design, allowing the execution of simulation scenarios involving remote applications. MuPIF provides a transparent distributed object system, which takes care of the network communication between the objects when they are distributed over different machines on the network. One can simply call a method on a remote object as if it were a local object: the use of remote objects is (almost) transparent. This is achieved by the concept of proxies representing remote objects, which forward the calls to the remote objects and pass the results back to the calling code. In this way, there is no difference between the simulation script for the local or the distributed case, except for the initialization, where, instead of creating a local object, one has to connect to the remote object.
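A scenario in this style is just a short Python script steering the applications. The toy classes and method names below (ThermalApp, MechanicalApp, set_field) are hypothetical stand-ins for wrapped application interfaces, used only to show the shape of a steering loop with high-level data exchange.

```python
class ThermalApp:
    """Toy stand-in for a wrapped thermal solver."""
    def solve_step(self, t):
        self.temperature = 20.0 + t            # pretend solution
    def get_field(self):
        return self.temperature

class MechanicalApp:
    """Toy stand-in for a wrapped mechanical solver."""
    def set_field(self, temperature):
        self.temperature = temperature
    def solve_step(self, t):
        # thermal expansion driven by the imported temperature field
        self.strain = 1.2e-5 * (self.temperature - 20.0)

thermal, mechanical = ThermalApp(), MechanicalApp()
for t in (1.0, 2.0, 3.0):                      # the time-stepping loop of the scenario
    thermal.solve_step(t)
    mechanical.set_field(thermal.get_field())  # high-level data exchange
    mechanical.solve_step(t)
print(mechanical.strain)
```

In the distributed case, only the two construction lines would change: the scenario would connect to remote proxies instead of instantiating local objects, while the loop stays identical.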

Parallel and distributed environments

In the case of parallel and distributed applications, an additional level of complexity has to be addressed. The individual applications can be physically distributed over the network. An important role of the framework is to provide a transparent communication mechanism between the individual classes, which takes care of the network communication between the objects if necessary. The design of the communication layer allows a method to be called on a remote object as if it were a local object: the use of remote objects is transparent. This is achieved by the introduction of so-called proxies, which forward method calls to the remote objects and pass results back.
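The proxy idea can be illustrated with a few lines of plain Python. This is a schematic sketch only: a real implementation (such as Pyro's) would marshal the call over a socket rather than hold a direct reference.

```python
class Proxy:
    """Acts as if it were the remote object: forwards every method call."""
    def __init__(self, remote):
        self._remote = remote   # in reality: a network connection, not a reference

    def __getattr__(self, name):
        def forward(*args, **kwargs):
            # a real proxy would serialize name/args, send them over the
            # network, and deserialize the reply; here we call directly
            return getattr(self._remote, name)(*args, **kwargs)
        return forward

class Solver:
    def solve(self, x):
        return x * 2

proxy = Proxy(Solver())
print(proxy.solve(21))   # used exactly as if it were the local object -> 42
```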

Figure: Concept of distributed mapping.

The data retrieval and processing should be performed in parallel as well, without compromising scalability. In particular, a scalable implementation of field mapping is quite challenging. The key idea is to represent the needed remote data locally on the target computing node, so that the transfer can be performed in parallel.

Moreover, when the source field view locally caches the remote data, the source field values are transferred only once. This concept of parallel field transfer is illustrated in the following figure, where a simple interpolation field projection is used. On the computing nodes containing the target sub-domains, the field view of the source data is set up in such a way that its underlying sub-domain spatially matches the target sub-domain. This mapping is represented by the MappingContext class. Once the local representation of the remote data matching the target sub-domain is available on all target computing nodes, the mapping itself can be done in parallel, without any additional communication.
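The following sketch illustrates the idea on 1D piecewise-linear fields: a hypothetical MappingContext caches just the source values covering the target sub-domain, after which interpolation runs locally with no further communication. The class structure is an illustration under these assumptions, not the actual MuPIF MappingContext.

```python
import bisect

class MappingContext:
    """Caches the part of the (remote) source field covering one target
    sub-domain, so the mapping itself runs locally without communication."""
    def __init__(self, src_xs, src_vals, target_lo, target_hi):
        # keep only the source points needed inside [target_lo, target_hi]
        lo = max(bisect.bisect_right(src_xs, target_lo) - 1, 0)
        hi = min(bisect.bisect_left(src_xs, target_hi) + 1, len(src_xs))
        self.xs = src_xs[lo:hi]
        self.vals = src_vals[lo:hi]

    def interpolate(self, x):
        # piecewise-linear interpolation on the cached local data
        i = bisect.bisect_right(self.xs, x) - 1
        i = min(max(i, 0), len(self.xs) - 2)
        x0, x1 = self.xs[i], self.xs[i + 1]
        t = (x - x0) / (x1 - x0)
        return (1 - t) * self.vals[i] + t * self.vals[i + 1]

# global source field (v = 10*x); the target sub-domain [0.25, 0.75] lives
# on another computing node, which caches only the overlapping slice
src_xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
src_vals = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
ctx = MappingContext(src_xs, src_vals, 0.25, 0.75)
print(ctx.interpolate(0.5))   # approximately 5.0 (linear field v = 10*x)
```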

The setup of the mapping contexts on the target application's computing nodes requires a global representation of the remote data. This is needed because the target application should not have to be aware of the source application's deployment. Therefore, application agents have to be created by the individual applications. They essentially hide the distributed character of the underlying mesh or field and manage the proper dispatching of messages to the individual computing nodes containing the distributed data. The application agent implements the application interface, and its role is to represent the overall global access point for the application. The agent is aware of the distributed application data structure, which allows it to execute data request operations efficiently by splitting them based on the application partitioning, routing the requests to the processes owning the data, and assembling the results.

Despite many advantages, the introduction of an application agent also has some drawbacks. If all requests are passed through the agent, it may become a bottleneck. However, due to the distributed nature, multiple data requests can be processed in parallel, for example by creating a thread for each request. Also, as discussed in the previous example of the distributed mapping operation, the agent is needed only for setting up the mapping contexts, which determine the mapping of the distributed source data. After the mapping contexts are set up, the data transfers from the source to the target computing nodes can be done in parallel, without the need for communication through the agent: the mapping context contains all the data necessary to communicate directly with the source computing nodes, as the data distribution is already known at this stage.
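The split-route-assemble role of an agent can be sketched as follows. The partitioning scheme and all names here (ComputeNode, ApplicationAgent, get_values) are illustrative assumptions, not MuPIF classes.

```python
class ComputeNode:
    """One process of a distributed application, owning a slice of the data."""
    def __init__(self, values):
        self.values = values            # locally owned data, keyed by global id

    def get_values(self, ids):
        return {i: self.values[i] for i in ids}

class ApplicationAgent:
    """Global access point: hides the distributed data layout from the caller."""
    def __init__(self, nodes, owner):
        self.nodes = nodes              # node rank -> ComputeNode
        self.owner = owner              # global id -> owning node rank

    def get_values(self, ids):
        # split the request by ownership ...
        per_node = {}
        for i in ids:
            per_node.setdefault(self.owner[i], []).append(i)
        # ... route each part to the owning node (could run in parallel threads) ...
        result = {}
        for rank, part in per_node.items():
            result.update(self.nodes[rank].get_values(part))
        # ... and assemble the combined answer
        return result

nodes = {0: ComputeNode({0: 1.0, 1: 2.0}), 1: ComputeNode({2: 3.0, 3: 4.0})}
agent = ApplicationAgent(nodes, owner={0: 0, 1: 0, 2: 1, 3: 1})
print(agent.get_values([1, 2]))   # -> {1: 2.0, 2: 3.0}
```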

Implementation

Rather than writing monolithic programs, the Python language is extended by modules representing interfaces to existing codes, each with specific functionality. The emphasis is on building infrastructure to facilitate the implementation of multi-physics and multi-level simulations. The high-level language serves as a “glue” to tie the modules or components together to create a specialized application. Python provides the flexibility, interactivity, and extensibility needed for such an approach, thanks to its concise and pseudocode-like syntax, modularity and object-oriented design, introspection and self-documentation capability, and the availability of numerical extensions allowing the efficient storage and manipulation of large amounts of numerical data. The application interface can be conveniently realized by wrapping the application code. The process of wrapping code can be automated to a fair extent using SWIG, Boost, or similar tools, which can generate wrapper code for several languages. This approach also allows a single source version of the component code to be maintained.

The initial idea was to build an abstract communication layer, for example on top of the XML-RPC protocol. Later, the Pyro library was adopted. Pyro is short for PYthon Remote Objects. It is an advanced and powerful distributed object technology system written entirely in Python, designed to be very easy to use. When using Pyro, the user just designs his or her Python objects. With only a few lines of extra code, Pyro takes care of the network communication between the objects once they are distributed over different machines on the network. All the socket programming details are handled: one just calls a method on a remote object as if it were a local object; the use of remote objects is (almost) transparent. This is achieved by the introduction of so-called proxies. A proxy is a special kind of object that acts as if it were the actual (remote) object. Proxies forward method calls to the remote objects and pass results back to the calling code. Pyro also provides a Naming Service, which keeps a record of the locations of objects.

The use of Pyro allows one to concentrate fully on the application design; the distributed processing and data exchange are conveniently and transparently handled by Pyro. This is particularly convenient in the initial phases of the project, where the focus is on the design and prototype implementation of the framework.

Documentation

How to get MuPIF

Download of MuPIF release versions:

The development version is now hosted at SourceForge: http://sourceforge.net/projects/mupif/

Related download

How to use MuPIF

The framework provides high-level support for mutual data exchange between the individual applications. Each application should provide its implementation of the MuPIF Application Interface (API). This interface is needed for efficient steering and data exchange. It allows the framework to call the individual codes at appropriate times, handle exceptional situations, and request or update application data. Such an approach is very flexible and allows communication with particular applications on an abstract level, permitting easy addition and replacement of components.

The framework includes support for different discretization techniques and specific field transfer operators aware of the underlying physical phenomena. The field representation and field exchange methods support various data types (scalar, vector, and tensorial values), independently of the actual discretization.
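A field abstraction of this kind can be sketched generically: scalar and vector values go through the same interpolation code path, independent of how the mesh is discretized. The NodalField class below is an illustration under these assumptions, not the actual MuPIF Field class.

```python
class NodalField:
    """Field given by nodal values on a 1D mesh; the value type (scalar or
    vector) does not matter to the caller, only to the arithmetic inside."""
    def __init__(self, xs, values):
        self.xs, self.values = xs, values

    def evaluate(self, x):
        # locate the element containing x and interpolate linearly
        for i in range(len(self.xs) - 1):
            x0, x1 = self.xs[i], self.xs[i + 1]
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                a, b = self.values[i], self.values[i + 1]
                if isinstance(a, (int, float)):        # scalar value
                    return (1 - t) * a + t * b
                return [(1 - t) * ai + t * bi          # vector value, per component
                        for ai, bi in zip(a, b)]
        raise ValueError("point outside the field domain")

scalar = NodalField([0.0, 1.0], [0.0, 10.0])
vector = NodalField([0.0, 1.0], [[0.0, 0.0], [2.0, 4.0]])
print(scalar.evaluate(0.5))   # -> 5.0
print(vector.evaluate(0.5))   # -> [1.0, 2.0]
```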

Minimal working example

This minimal working MuPIF script illustrates how to invoke the OOFEM FEM solver using the MuPIF API, request the solution field, and visualize this field using MayaVi.

#run this code as "mayavi test.py"
 
#include MuPIF modules
from oofem_interface import *
from mupif import field
from mupif import timestep
from mupif import util
 
# MayaVi stuff
from enthought.mayavi.scripts import mayavi2
from enthought.mayavi.sources.vtk_data_source import VTKDataSource
from enthought.mayavi.modules.outline import Outline
from enthought.mayavi.modules.surface import Surface
from enthought.mayavi.modules.vectors import Vectors
 
 
def main():
    global mayavi
 
    # create new timestep
    tstep = timestep.TimeStep(1.0, 1.0)
    try:
        #create new oofem interface, pass problem input as parameter
        oofem = OOFEM_API("patch100.in")
        #solve the problem
        oofem.solve(tstep)
        #request displacement field
        f = oofem.giveField(field.FieldID.FID_Displacement, tstep)
    except APIError as e:
        print "OOFEM_API error occurred:", e
        return
 
    # initialize the MayaVi visualizer to display the results
    # first, create tvtk data source from solution field
    src = util.field2VTKDataSource(f)
 
    #setup MayaVi scene
    mayavi.add_source(src)
    mayavi.add_module(Outline())
    m = Surface()
    m.actor.property.representation='w'
    mayavi.add_module(m)
 
if __name__ == '__main__':
    mayavi.new_scene()
    main()
 

Resources

  • B. Patzák, D. Rypl, and J. Kruis. MuPIF – a distributed multi-physics integration tool. Advances in Engineering Software, 60–61:89–97, 2013 (http://www.sciencedirect.com/science/article/pii/S0965997812001329).
  • B. Patzák. Design of a multi-physics integration tool. In B. H. V. Topping, J. M. Adam, F. J. Pallares, R. Bru, and M. L. Romero, editors, Proceedings of the Seventh International Conference on Engineering Computational Technology, Stirlingshire, United Kingdom, 2010. Civil-Comp Press. paper 127.

Similar projects

Authors & Credits

MuPIF developers:

  • Bořek Patzák (Lead Developer)
  • Vit Šmilauer
  • Guillaume Pacquaut
  • Former developers: Daniel Rypl, Jaroslav Kruis

Acknowledgements:

mupif/mupif.1456825903.txt.gz · Last modified: 2016/03/01 10:51 by bp