Professor, University of Tartu
Time: Thursday, 3 September 2015, 9:00 – 10:30
Session Chair: Jörg Desel
It has been over two decades since the first research articles on Business Process Management (BPM) saw the light of day. Much ink has been spilled in the meantime to build a discipline out of what is essentially a vision of how work in organizations can be effectively conceptualized and analyzed for the purpose of performance improvement. There is by now a relatively well-established body of methods and tools to instill “process thinking” in organizations and to manage business processes throughout their lifecycle.
A considerable subset of these methods and tools relies on business process models, be it for understanding processes, preserving and communicating process knowledge, analyzing, redesigning or automating processes, or even monitoring them. It is thus not surprising that a lot of research and development in the field of BPM has concentrated on modeling languages, tools and methods, to the extent that the early evolution of the discipline is sometimes associated with the development of modeling languages. Along this line, the discipline has gone through a long convergence and standardization process: starting from proprietary notations such as Event-driven Process Chains (EPCs), moving on to standardization attempts such as UML Activity Diagrams and the XML Process Definition Language (XPDL), followed by a parade of standardization proposals and associated acronyms in the early ’00s (WSFL, XLANG, BPML and WSCI, to name a few), the rise and fall of the Business Process Execution Language (BPEL), the broad adoption of the Business Process Model and Notation (BPMN), and the somewhat unsuccessful struggle to reach a standard case management notation (cf. CMMN).
The overwhelming volume of these developments calls for two questions:
What have we fundamentally learned from the development of modeling languages, tools and methods? And perhaps more importantly, what have we so far failed to fully comprehend?
Another significant subset of methods and tools in the BPM field relies on data, specifically data collected during the execution of business processes. As processes become increasingly digitized, data is moving from being a (necessary) side-product of the execution of business processes to becoming a central asset that can be leveraged across all phases of the business process lifecycle. This prospect has fueled a stream of research and development on business process data analytics, starting from dashboards, cockpits and process data warehouses, through to the era of process mining methods and tools. Along this line, we have seen a number of methods and tools emerge to summarize process execution data, to generate or enhance models using these data, and to understand how the recorded execution of a business process diverges from its modeled behavior, or vice versa.
Again, the overwhelming volume of developments in this field calls for two questions: What have we fundamentally learned from the development of process mining tools and methods? And perhaps more importantly, what have we so far failed to fully comprehend?
This talk will argue that answers to the above questions can be summarized with two concepts: variation and decisions, be they offline (e.g. at design time) or online (at runtime). Many if not most developments and open challenges in the field boil down to comprehending, analyzing, executing and monitoring business processes with inherently high levels of variation and with complex decisions. Indeed, the discipline has learned to analyze, optimize and automate routine work that involves well-structured data objects and simple choices, even on relatively large scales. But we have yet to learn how to manage large-scale variation, unstructuredness and complex decision spaces. The emergence of the Internet of Things and cyber-physical systems is likely to only heighten the challenge: in a world where the number of connections increases exponentially, so does the complexity of the options and variations that ought to be accounted for. The coming of age of automated decision making, the maturation of natural language processing, and advances in heterogeneous data analytics create significant opportunities to address the challenges that lie ahead for the BPM discipline.
For a while, the trend in BPM has been to simplify by standardization, at different levels. Now it is time to learn how to embrace variation and the manifold decisions that arise from it. One thing is for sure: a tangled road lies ahead towards BPM 2020.
Marlon Dumas is Professor of Software Engineering at the University of Tartu, Estonia. Prior to this appointment, he was a faculty member at Queensland University of Technology and a visiting researcher at SAP Research, Australia. His research interests span the fields of software engineering, information systems and business process management. His ongoing work focuses on combining data mining and formal methods for the analysis and monitoring of business processes. He is co-recipient of three best paper awards at international conferences (ETAPS’2006, BPM’2010, BPM’2013), three best student paper awards with his PhD students (EEE’2005, CEC’2009, BPM’2014), and a ten-year most influential paper award at the MODELS’2011 conference. He is also co-inventor of six granted patents and co-author of the textbook “Fundamentals of Business Process Management”, now used in more than 100 universities worldwide.