Dr Claude Diderich is a strategy consultant specializing in design-thinking-based business model innovation. He has more than 20 years of experience in business model innovation, strategy design and implementation, product and service development, business architecture optimization, and digital transformation. Over his years as a consultant, Claude has worked with numerous firms and advised them on improving their profitability through creativity and innovation in a digital age.
He is the founder and managing director of innovate.d, a consulting boutique focused on innovation, strategy, and digital transformation advice for service firms. Prior to founding innovate.d in 2011, Claude worked in senior roles at Credit Suisse, UBS, Bank Julius Bär, and Deloitte Consulting.
Claude Diderich holds a doctorate ès sciences and a master's degree in computer science engineering from the Swiss Federal Institute of Technology in Lausanne, a certificate of advanced studies in strategy from the University of St. Gallen, and a specialization certificate in design thinking and innovation from the Darden School of Business, University of Virginia. He is a certified EFFAS Financial Analyst and Portfolio Manager (CEFA), FRM certified, a certified New Product Development Professional (NPDP), and a certified Project Management Professional (PMP). He is also a certified facilitator in the LEGO® SERIOUS PLAY® method. Claude is a member of the Strategic Management Society and serves on the editorial review board of the Journal of Business Models.
At least since the advent of Amazon, digital business model innovation has been on every manager's lips. Many successful firms, including Airbnb, Facebook, and Apple, rely on the fundamentals of digital business models to define their strategy. To understand the drivers of their success, it is important to grasp what determines success. The academic answer to this generic question is equally generic, namely "strategy". Strategy, according to Ansoff, Andrews, or Barney, is rooted in allocating scarce resources. Strategists like Kaplan, Norton, or Steyer see planning as the foundation of strategic success. Porter, another renowned strategy researcher, defines success as identifying and exploiting competitive advantages in an industry setting. Others, like Mintzberg, see success rooted in managerial decision making. All these definitions of success fail to consider the specificities of a digital world, including its service orientation, its ever-changing nature, the availability of big data, and the blurring of the notion of industry.
Martin defines strategy as making informed choices about how to play and win the competitive game. This definition comes closer to the reality of a digital economy. A core element of how to play in a digital world is defining a firm's business model. The business model framework is the new concept for leading organizational renewal in a customer-centric digital world. The simplest definition of a business model is a description of i) how a firm creates value for its customers and ii) how it captures value for itself. When analyzing the business models of successful firms, five key attributes can be identified, each related to the answer to one of five key questions:
To design an innovative digital business model, it is important to understand three key differences between the business models of brick-and-mortar firms (e.g. car manufacturers, shopping malls, craftsmen) and those of firms operating in a digital economy.
First, many digital business models target multiple distinct customer segments and create value by connecting them. A typical example is Uber, which connects people seeking transportation from A to B with drivers offering such transportation on a customized basis. Other examples are credit card companies, which connect consumers (buying products) with stores (selling products) and banks (handling the transfer of money). The traditional concept of industry is replaced by that of ecosystem.
Second, successful digital business models focus on a superior understanding and fulfillment of customer needs. Rather than putting technology at the forefront (e.g. blockchain or artificial intelligence) or focusing on scaling a commodity service (e.g. messenger services or payments), as many less effective digital firms try to do, success comes from helping customers get their job done. For example, Amazon addresses the need for choice and availability of books rather than trying to sell books in stock. Hilti offers access to construction tools, such as drills, on demand for specific tasks as a service rather than selling them. WeTransfer supports transferring large files from one user to another instead of selling a medium.
Third, successful digital business models address the old-fashioned competitive advantage question in a novel way. Having a differentiating trait is especially important in the digital world, as it is often all too easy to copy a digital business model. Sometimes, competitive advantage is based on a winner-takes-all approach, as in the case of Facebook. But often, having a distinct, hard-to-imitate differentiating characteristic is key. Access to specific resources (e.g. movies available on Netflix, songs on Spotify) may be considered such a competitive advantage. Contrary to common wisdom, differentiation may come from focusing on specific customer segments, specific jobs to be done, or even specific roles in an ecosystem, rather than trying to satisfy any need from any customer at any cost.
In summary, successful digital business models need to exhibit four traits:
Daniel Traian Pele is an associate professor at the Bucharest University of Economic Studies, Department of Statistics and Econometrics. His research areas cover quantitative modeling, financial econometrics, rare events, and statistics.
He has published numerous papers in statistics and finance in ISI indexed journals.
The aim of this paper is to derive the main factors that separate cryptocurrencies from classical assets, by using various classification techniques applied to the daily time series of log-returns. In this sense, a daily time series of asset returns (of either cryptocurrencies or classical assets) can be characterized by a multidimensional vector with statistical components such as variance, skewness, kurtosis, tail probability, quantiles, conditional tail expectation, or GARCH parameters. By using dimension reduction techniques (Factor Analysis) and classification models (Binary Logistic Regression, Discriminant Analysis, Support Vector Machines, K-means clustering, Variance Components Split methods) on a representative sample of cryptocurrencies, stocks, exchange rates, and commodities, we are able to classify cryptocurrencies as a new asset class with unique features in the tails of the log-returns distribution; moreover, the cryptocurrencies are classified into disjoint clusters based on their statistical properties. The main result of our paper is the complete separation of the cryptocurrencies from the other types of assets by using the Maximum Variance Components Split method. In addition, we observe a synchronicity in the evolution of the cryptocurrencies, compared to the classical assets, mainly due to the tail behaviour of the log-return distribution. The codes are available via www.quantlet.de.
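The feature-vector characterization described in the abstract can be sketched as follows. This is an illustrative Python reconstruction with simulated returns, not the authors' code (which is available via www.quantlet.de); a heavy-tailed Student-t series stands in for a cryptocurrency and a Gaussian series for a classical asset.

```python
import numpy as np

def tail_features(returns, alpha=0.05):
    """Summarize a daily log-return series as a feature vector.

    A subset of the statistics the paper uses to characterize each
    asset: variance, skewness, kurtosis, tail quantiles, and the
    conditional tail expectation (CTE).
    """
    r = np.asarray(returns, dtype=float)
    z = (r - r.mean()) / r.std()
    q_lo, q_hi = np.quantile(r, [alpha, 1 - alpha])
    return {
        "variance": r.var(),
        "skewness": (z ** 3).mean(),
        "kurtosis": (z ** 4).mean(),   # raw kurtosis (normal = 3)
        "q_left": q_lo,
        "q_right": q_hi,
        # conditional tail expectation: mean loss beyond the left quantile
        "cte_left": r[r <= q_lo].mean(),
    }

rng = np.random.default_rng(0)
thin = rng.normal(0, 0.01, 2000)        # stock-like returns
fat = rng.standard_t(2, 2000) * 0.01    # crypto-like heavy-tailed returns
f_thin, f_fat = tail_features(thin), tail_features(fat)
# the heavy-tailed series should show larger kurtosis and a deeper left CTE
print(f_fat["kurtosis"] > f_thin["kurtosis"], f_fat["cte_left"] < f_thin["cte_left"])
```

Such feature vectors, one per asset, are what the factor-analysis and classification steps then operate on.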
In this paper we applied various classification techniques in order to discriminate between cryptocurrencies and classical assets, such as stocks, exchange rates, and commodities. Through dimensionality reduction and classification techniques, we showed that most of the variation among cryptocurrencies, stocks, exchange rates, and commodities can be explained by three factors: the tail factor, the moment factor, and the memory factor. Our analysis revealed that the main difference between cryptocurrencies and classical assets, in terms of the properties of the distribution of daily log-returns, lies in the tail behaviour, both in the left and in the right tail of the distribution. The moments of the distribution and the GARCH/ARCH parameters are of secondary importance for discriminating between cryptocurrencies and classical assets.
Based on the tail factor profile, we can conclude that a random asset is likely to be a cryptocurrency if it has the following properties: very long tails of the log-returns distribution (in terms of the left and right quantiles and the conditional tail expectation), high variance, a high value of the alpha-stable scale parameter, and a value of the alpha-stable tail index closer to 1. Moreover, cryptocurrencies are completely separated from the other types of assets, as shown by the Maximum Variance Components Split method. From the point of view of risk analysts and regulators, the non-linear classification techniques applied to the extracted factors provide effective results for discriminating between cryptocurrencies and other assets.
Through the means of an expanding window approach, we are able to depict the evolutionary dynamics of the cryptocurrency universe and show how the clusters formed by projecting the multidimensional dataset on the main factors converge over time. By looking at the assets' universe as a complex ecosystem, we are able to conclude that cryptocurrencies exhibit both a synchronic evolution (individual cryptocurrencies develop similar characteristics over time) and a divergent evolution, as different species, compared to classical assets.
Florin Gh. Filip is a senior scientific researcher 1st degree and head of the "Information Science and Technology" Section of the Romanian Academy (2009, 2015, 2019).
He is the author/coauthor of over 350 papers published in international journals (IFAC J Automatica, IFAC J Control Engineering Practice, Annual Reviews in Control, Computers in Industry, System Analysis Modeling Simulation, Large Scale Systems, Technological and Economic Development of Economy (TEDE), and so on) and has contributed to volumes printed by international publishing houses (Pergamon Press, North Holland, Springer, Elsevier, Kluwer, Chapman & Hall, and so on). He is the author/coauthor of 13 monographs and editor/coeditor of 29 contributed volumes and conference proceedings published in Romanian, English, and French by the Romanian Academy Publishing House (EAR), Editura Tehnică, Springer, J. Wiley & Sons, Hermes Lavoisier (Paris), Elsevier, the American Institute of Physics, and so on. He has been an IPC member of more than 60 international conferences held in Europe, the USA, South America, Asia, and Africa.
Prof. Filip is a visiting professor who has been invited to give plenary talks at international conferences and seminars at universities and R&D organizations in Austria, Brazil, Chile, China, the Czech Republic, England, France, Germany, Kuwait, Lithuania, the Republic of Moldova, Poland, Portugal, Spain, Sweden, and Tunisia.
Almost six decades ago, when presenting the scientific program of Stanford Research Institute, Engelbart (1962) stated: '…By augmenting human intellect we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems'. The present talk is intended to show the role of I&C (Information and Communication) technologies in supporting people to make ever more effective decisions that are better adapted to the current business and technological context.
The talk starts by highlighting the evolving meaning and scope of automation and the role of the human agent in the management and control system architecture (Filip 2020). Then, several paradigms of the modern enterprise and the relevant enabling ICT are reviewed.
A DSS (Decision Support System) is defined, in the context of management and control systems, as an anthropocentric and evolving information system meant to implement the functions of a human support system that would otherwise be necessary to help the decision-maker overcome the limits and constraints he/she may encounter when trying to solve complex and complicated decision problems that matter (Filip, Leiviskä 2009). A historical account of the evolution of DSS is presented, based on the paper by Power et al (2019).
The case of collaborative decision-making processes and the corresponding multi-participant (group) DSS (Filip et al 2017; Konaté et al 2020) is analyzed. Preliminary results of a comparative analysis of the available platforms for crowdsourcing-based decision-making (Ciurea, Filip 2019) are provided.
The original main classes of DSS, namely model-oriented and data-oriented systems, were proposed by Alter (1977). Since then, I&C technology has advanced, and new tools such as AI-based tools, Big Data, cloud and mobile computing, and the Internet of Things have enabled new DSS generations (Filip et al 2017). Special attention is paid in the paper to AI (Artificial Intelligence)-based tools. Their usage to support decision-making was foreseen by Simon (1987). The combination of rule-based expert systems with mathematical models within DSS (Filip 1991) is reviewed, together with the story of DISPATCHER, a family of early DSS designed for production control in process industries (refineries, petro-chemical plants) and related systems composed of several processing units interconnected via buffer tanks. The concepts of intelligent DSS (Kaklauskas 2015) and digital cognitive agents (Rouse, Spohrer 2018), designed to continuously augment the human agent's knowledge and problem-solving capabilities, are highlighted. The limitations of and debatable questions about the usage of AI-based tools are then reviewed.
In his assumptions about the future Age of Information, Drucker (1967) forecast that information would become very cheap. Now, we can notice that it is not only very cheap, but also abundant, diverse, complex, and valuable, and that it shows fast dynamics, especially in emergency situations such as the current one caused by the coronavirus. The Big Data/Data Science domain has emerged and gained traction. Concept drift in data streaming (Lu et al 2019) leads to ever more complicated decision situations and problems. In a World Economic Forum report (WEF 2019), it is stated that "Big-data decision making is a value driver for impact at scale, one of the differentiators that transforms how technology is implemented, how people interact with technology, and how it affects business decisions". The DSS has evolved accordingly, from model-oriented solutions to data-driven ones. The open questions concerning Big Data ethical issues (Nair 2020) and the new Dataism paradigm (Harari 2016) are discussed, and a service-oriented DSS platform (Candea, Filip 2016) is eventually presented.
Selected References
Alter S (1977) A taxonomy of decision support systems. Sloan Management Review, 19 (1): 9-56
Candea C, Filip FG (2016) Towards intelligent collaborative decision support platforms. Studies in Informatics and Control, 25(2): 143-152
Ciurea C, Filip FG (2019) Collaborative platforms for crowdsourcing and consensus-based decisions in multi-participant environments. Informatica Economică 23(2): 5-14
Drucker P F (1967) The manager and the moron. In: Drucker P, Technology, Management and Society: Essays by Peter F. Drucker, Harper & Row, New York, p. 166-177
Engelbart DC (1962) Augmenting Human Intellect: A Conceptual Framework. SRI Project 3578 https://apps.dtic.mil/dtic/tr/fulltext/u2/289565.pdf
Engelbart DC, Lehtman H (1988) Working together. Byte, December: 245-252
Filip FG (1991) System analysis and expert systems techniques for operative decision making. Systems Analysis Modelling Simulation, August
Filip FG (2020) DSS—a class of evolving information systems. In: Dzemyda G, Bernatavičienė J, Kacprzyk J (eds) Data Science: New Issues, Challenges and Applications. Studies in Computational Intelligence, vol 869. Springer, Cham
Filip FG, Leiviskä K (2009) Large-scale complex systems. In: Nof S (ed) Springer Handbook of Automation. Springer Handbooks. Springer, Berlin, Heidelberg
Filip FG, Zamfirescu CB, Ciurea C (2017) Computer Supported Collaborative Decision-making. Springer, Cham
Harari YN (2016) Homo Deus: A Brief History of Tomorrow. Random House
Kaklauskas A (2015) Biometric and Intelligent Decision Making Support. Springer
Konaté J, Zaraté P, Gueye A, Camilleri G (2020) An ontology for collaborative decision making. In: Morais D, Fang L, Horita M (eds) Group Decision and Negotiation: A Multidisciplinary Perspective. GDN 2020. Lecture Notes in Business Information Processing, vol 388. Springer, Cham, p. 179-191
Lu J, Liu A, Song Y, Zhang G (2019) Data-driven decision support under concept drift in streamed big data. Complex & Intelligent Systems, https://doi.org/10.1007/s40747-019-00124-4
Nair SJ (2020) A review on ethical concerns in big data management. Int. J. Big Data Management, 1(1): 8-25
Power D J, Heavin C, Keenan P (2019) Decision systems redux. Journal of Decision Systems, DOI: 10.1080/12460125.2019.1631683
Rouse W B, Spohrer JC (2018) Automating versus augmenting intelligence. Journal of Enterprise Transformation. DOI: 10.1080/19488289.2018.1424059
Simon H (1987) Two heads are better than one: the collaboration between AI and OR. Interfaces 17(4):8–15
WEF (2019) Fourth Industrial Revolution Beacons of Technology and Innovation in Manufacturing. World Economic Forum
Sihem Romdhani received her Master of Applied Science (MASc) degree in Electrical and Computer Engineering from the University of Waterloo-Canada in 2015. Her academic research was focused on Deep Learning for Speech Recognition.
Sihem has earned multiple awards including the Tunisian Government Sponsorship for graduate studies in Canada and The University of Waterloo Scholarship in 2013.
She is currently working at Veeva Systems in Toronto as a Data Scientist, where she builds machine learning models for Natural Language Processing. She has led multiple projects on text parsing, sequence tagging, information extraction from unstructured text data, and sentiment analysis. She has also worked on recommendation systems using different ML algorithms, including Reinforcement Learning. Sihem is deeply interested in AI and in solving new and challenging problems. Throughout her education, academic research, and work in industry, she has gathered experience and knowledge that she enjoys sharing through public presentations. Sihem has been a featured speaker at the Open Data Science Conference since 2018.
AI (Artificial Intelligence) technology is now transforming every industry, from manufacturing and life sciences to the arts. Thanks to deep learning, we are able to build very sophisticated and highly accurate machine learning models.
One of the areas that has adopted AI most successfully is digital advertising. Recommender systems have changed the consumer marketing world and reshaped the producer-consumer relationship. These are algorithms that recommend to consumers products similar to their previous choices. Recommender systems are widely used by websites such as YouTube, Netflix, and Amazon, and they have a significant impact on consumer sales. There are different machine learning techniques for building recommender systems; the latest ones use embeddings.
In the NLP (Natural Language Processing) domain, word embedding is a technique that uses deep neural networks to learn low-dimensional representations of words, where similar words have similar representations (i.e., embeddings). Word2vec is a method for efficiently creating word embeddings. There are two key principles behind word2vec: the meaning of a word can be inferred from its context, and words with similar meanings tend to appear in similar contexts. Hence, words can derive their embeddings from their neighbors (the words to the left and to the right of the word to be encoded).
Skip-gram is one of the word2vec model architectures. It trains a model to predict the neighboring words of the current word. By doing so, it learns efficient embeddings that preserve the semantic and syntactic relationships between words.
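The idea can be sketched with a toy skip-gram trainer in plain NumPy. This is a simplified illustration, not production word2vec: it uses a full-softmax output layer instead of the negative sampling or hierarchical softmax used in practice, and a tiny made-up corpus.

```python
import numpy as np

# Tiny corpus in which "like" and "enjoy" occur in identical contexts.
corpus = [
    ["i", "like", "deep", "learning"],
    ["i", "enjoy", "deep", "learning"],
    ["we", "like", "deep", "learning"],
    ["we", "enjoy", "deep", "learning"],
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, dim, window, lr = len(vocab), 8, 2, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, dim))   # rows become the word embeddings
W_out = rng.normal(0, 0.1, (V, dim))  # output (context-prediction) weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):
    for sent in corpus:
        for pos, word in enumerate(sent):
            c = idx[word]
            for off in range(-window, window + 1):
                if off == 0 or not 0 <= pos + off < len(sent):
                    continue
                o = idx[sent[pos + off]]   # observed neighboring word
                h = W_in[c]
                p = softmax(W_out @ h)     # P(context word | center word)
                grad = p.copy()
                grad[o] -= 1.0             # cross-entropy gradient
                d_in = W_out.T @ grad
                W_out -= lr * np.outer(grad, h)
                W_in[c] -= lr * d_in

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# interchangeable words should end up with more similar embeddings
sim_syn = cosine(W_in[idx["like"]], W_in[idx["enjoy"]])
sim_far = cosine(W_in[idx["like"]], W_in[idx["learning"]])
print(sim_syn, sim_far)
```

After training, the embedding of "like" should be closer to that of "enjoy" than to words playing different roles, which is exactly the property the text describes.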
More recently, the concept of embeddings has been shown to be effective in applications outside the NLP domain. Researchers in the web search, e-commerce, and marketplace domains have realized that word2vec can be used to learn embeddings of user actions by treating sequences of user actions as context. Examples include learning representations of items that were browsed or purchased, or of queries and ads that were clicked. These embeddings have subsequently been leveraged to create recommendation engines.
Companies like Airbnb, Yahoo, Alibaba, Anghami, and Spotify have all benefited from using the word2vec approach to extract insights from users' behavior. They are able to efficiently compare, search, and categorize items using embedding representations. As a result, smarter and more powerful recommendation systems have been deployed, allowing real-time personalization in search ranking. As an example, Airbnb used the skip-gram model for listing embeddings (vector representations of Airbnb homes) and saw a 21% increase in click-through rate (CTR) on its Similar Listings carousel.
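The recommendation step these companies build on top of such embeddings can be sketched as a cosine-similarity nearest-neighbor lookup. The item names and vectors below are invented for illustration; in practice the vectors would come from a skip-gram model trained on user sessions (sessions playing the role of sentences, items the role of words).

```python
import numpy as np

# Hypothetical item embeddings (made-up names and vectors).
items = ["beach_house", "city_loft", "mountain_cabin", "lake_cottage"]
emb = np.array([
    [0.90, 0.10, 0.00],
    [0.00, 0.90, 0.20],
    [0.80, 0.00, 0.30],
    [0.85, 0.10, 0.10],
])

def recommend(query, k=2):
    """Return the k items most similar to `query` by cosine similarity."""
    q = emb[items.index(query)]
    norm = np.linalg.norm
    sims = emb @ q / (norm(emb, axis=1) * norm(q))
    order = np.argsort(-sims)                 # most similar first
    return [items[i] for i in order if items[i] != query][:k]

print(recommend("beach_house"))  # → ['lake_cottage', 'mountain_cabin']
```

This nearest-neighbor lookup is what powers "similar items" carousels; at scale it is done with approximate nearest-neighbor indexes rather than a full scan.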
Despite all this success, building AI-based recommendation systems is hard. One of the major challenges is building systems that are robust to real-world conditions. There is still a huge gap between building a supervised model that achieves high performance on a static test set and shipping a valuable product that is resilient to changing conditions. This is because machine learning systems are poor at generalizing when the underlying data distribution changes (i.e., when the input data differs too much from the data they were trained on).
Amid the Covid-19 pandemic, online shopping behavior has radically changed, throwing a wrench into many recommendation engines. Systems trained on pre-pandemic consumer behavior are showing cracks and/or deteriorating accuracy caused by the sudden change in the way people now browse, binge, and buy, according to an MIT Technology Review article published on May 11, 2020.
To tackle this issue, the AI community needs to implement new approaches and processes for post-deployment monitoring, such as building alert systems to flag changes, using human-in-the-loop deployments to acquire new labels, and assembling robust MLOps teams.
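One simple way to build such a change-flagging alert is to compare the live input distribution against the training-time distribution. The sketch below uses the Population Stability Index, a common drift score; the thresholds are conventional rules of thumb, not from the talk, and the data is simulated.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a
    live sample; larger values indicate a bigger distribution shift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf     # catch out-of-range values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)      # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 5000)        # pre-pandemic behavior feature
live_ok = rng.normal(0.0, 1.0, 5000)      # same distribution: no alert
live_shift = rng.normal(1.5, 1.0, 5000)   # shifted behavior: alert

# common rule of thumb: PSI > 0.25 signals a significant shift
print(psi(train, live_ok) < 0.1, psi(train, live_shift) > 0.25)
```

Run per feature on a schedule, a score above the threshold would page the MLOps team or trigger a human-in-the-loop relabeling round, as described above.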
In addition to creating tremendous value, recommender systems carry enormous downside risk if we do not use data carefully. Datasets are critical to AI and machine learning, and they are becoming a key driver of the economy. Building a sophisticated recommender system requires collecting a massive amount of user data. This data usually includes sensitive information covering almost every aspect of people's lives. Since users have nearly no control over how the data they generate are used, data collection often puts personal privacy at risk. Building the foundation for a responsible data economy requires creating new technologies and business models that provide trustworthy protection and control to data owners. Approaches include secure computation through cryptographic techniques, the use of secure hardware, and the ability to audit. In addition, advances in robust learning from limited and noisy data could help build more sophisticated and resilient recommender systems without compromising privacy.