Review Article Open Access December 18, 2020

Event-Driven Architectures for Real-Time Regulatory Monitoring in Global Banking

Senior Data Specialist, Data Engineering, USA
Page(s): 1-14
Received: October 09, 2020
Revised: November 22, 2020
Accepted: December 10, 2020
Published: December 18, 2020

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright © The Author(s), 2020. Published by Scientific Publications.

Abstract

The global banking industry is subject to ever-growing regulatory requirements, designed to prevent a repeat of the financial crises that have torn through the world economy. These requirements continue to evolve, with new rules enacted each year. Implementing and executing these rules and regulations requires the guiding principles from senior management to reach the product desks in a clear and efficient way. Technical systems must implement these rules, and differences in interpretation, implementation, and warnings must be addressed during normal operations. Most importantly, systems must provide warning alerts to management and the business as early as possible to allow for proper handling; history has shown that the importance of early warnings has been overlooked repeatedly. Real-time capabilities are essential to meet these business needs, so organizations must be ready to embrace a next-generation architecture that enables real-time alert and warning generation. Systems based on a streaming architecture, combined with systems enabling the real-time flow of events between domains supported by orchestration, provide a solid foundation to meet these requirements.

1. Introduction

In a global financial system, analysts and regulators keep watch on trading actions, money flows, transaction patterns, risk management practices, balance sheet construction, and business models. Numerous regulations and multitudes of supervision functions involve proof that neither regulatory nor criminal law has been violated. Although many regulatory frameworks specify requirements for monitoring compliance, only a few of these frameworks require upfront real-time proof of compliance. And there are even fewer technical solutions that offer it [1]. The low number of available solutions deserves a closer look: Which monitoring requirements are described in regulatory frameworks and which are not? What makes real-time proof of compliance technically so difficult? What architectural style could help fulfill those needs? These questions form the basis for an analysis that examines the requirements for real-time regulatory monitoring in the area of banking and trading and derives implications for an architectural style that can deal with related challenges.

A systematic overview of supervisory requirements shows that regulatory monitoring requirements fall into one of two categories: technology-agnostic requirements, which demand proof of compliance for a specific time period, and technology-suggestive requirements, for which supervisory bodies expect continuous real-time monitoring of specific indicators [2]. Most such requirements concern trading activities and money flows. Numerous requirements, especially in banking supervision, are technology-agnostic but still create challenges for compliance. Examples include Basel III's LCR and NSFR regulations, which specify that institutions comply with a minimum LCR and NSFR at all times but are not designed for real-time proof of compliance [3]. For such requirements, systems designers can implement alerting systems that generate warnings as the values of the tested indicators approach the threshold values specified in the regulatory text.

1.1. Overview of the Study’s Scope and Purpose

Given the inherent latency in our traditional run-the-bank technology stacks, the NEXUS team has explored event-driven architectures as a means to provide real-time regulatory monitoring for financial institutions. Modern regulatory requirements call for near-real-time transaction monitoring; yet, these requirements can no longer be met with independent batch monitoring systems that provide results well after the transactions have been conducted and recorded in the bank’s systems [4]. A move to a more event-centric and real-time approach, not only in handling risk but across the entire bank, is being evaluated. Towards this goal, real-time monitoring, analytics, and control of banking operations is targeted by designing and implementing data pipelines that provide the required solutions [5].

Event-driven architecture (EDA) is a data-centric architecture style in which the data that moves through the system together with the operations performed on the data is the focus of various considerations such as design, implementation, operations, and monitoring. EDAs cover a broad span of architectural styles [6]. At one end of the spectrum are systems that handle data in event streams; that is, data flows continuously with respect to time. The other end of the spectrum encompasses data pipelines that consist of sequences of reporting and analytics operations that can be triggered in batches or periodically and still satisfy real-time use cases. An overview of EDA-oriented architectural styles that are becoming increasingly important for designing, developing, and operating run-the-bank systems is presented.

2. The Regulatory Landscape and Real-Time Monitoring Needs

Against a complex regulatory backdrop, banks must meet arduous requirements with little operational delay, or risk prohibitive remediation costs [7]. Event-Driven Architectures (EDA) technologies and methods help organizations listen for events that affect their business; they can be a crucial asset in simultaneously achieving compliance and improved customer service and experience.

Over the last decade, many major banks and financial institutions have been subject to significant regulatory scrutiny, resulting in the imposition of heavy fines and penalties. Regulators frequently require organizations to develop capabilities that enable them to detect and respond to specific events or activities, yet these capabilities are often delivered after delays that exceed regulatory expectations [8]. During conversations with leading tech organizations on the challenges customers face in developing such capabilities, the same topics have recurred repeatedly. Event-Driven Architectures provide a way to mature such capabilities, yet the tenets of Event-Driven Architectures are often misunderstood or underutilized.

Equation 1) Regulatory-monitoring equations

Liquidity Coverage Ratio (LCR)

Definition (Basel): banks hold a stock of HQLA to cover total net cash outflows over 30 days, and the ratio must be ≥ 100%.

Step-by-step derivation

  1. Let
    • H = stock of high-quality liquid assets (HQLA)
    • O = total expected cash outflows over the next 30 days (stress)
    • I = total expected cash inflows over the next 30 days (stress, capped in the Basel rules; cap omitted here for algebraic clarity)
  2. Net cash outflow over 30 days: N = O − I
  3. LCR is the "coverage" of net outflow by liquid assets: LCR = H / N
  4. Minimum requirement: LCR ≥ 1, equivalently LCR ≥ 100%

Real-time early warning (following the alerting idea above)

Let θ be the regulatory threshold and δ > 0 a warning margin:

θ_warn = θ + δ

Trigger warnings via the indicator:

Warning = 1 if LCR < θ_warn, and 0 otherwise

Net Stable Funding Ratio (NSFR)

Definition (Basel):

NSFR = Available Stable Funding (ASF) / Required Stable Funding (RSF) × 100%

and it should be at least 100% “on an ongoing basis.”

Step-by-step derivation

  1. Let A = ASF and R = RSF
  2. Ratio form: NSFR = A / R
  3. Minimum requirement: A / R ≥ 1
  4. Rearranging into a funding "gap" view: A − R ≥ 0

So, a real-time system can alert directly on:

Gap = A − R, with an alert if Gap < 0
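As a minimal sketch of the alerting logic derived above, the LCR warning and NSFR gap checks could look like this in Python (the snapshot values, threshold θ = 1.0, and margin δ = 0.25 are hypothetical illustrations, not regulatory figures):

```python
from dataclasses import dataclass

@dataclass
class LiquiditySnapshot:
    """Point-in-time inputs for the LCR and NSFR checks (currency units)."""
    hqla: float          # H: stock of high-quality liquid assets
    outflows_30d: float  # O: expected stressed outflows over 30 days
    inflows_30d: float   # I: expected stressed inflows (Basel cap omitted, as above)
    asf: float           # A: available stable funding
    rsf: float           # R: required stable funding

def lcr(s: LiquiditySnapshot) -> float:
    """LCR = H / N, with N = O - I."""
    net_outflow = s.outflows_30d - s.inflows_30d
    return s.hqla / net_outflow

def lcr_warning(s: LiquiditySnapshot, threshold: float = 1.0,
                margin: float = 0.25) -> bool:
    """Early warning fires while LCR < theta + delta, i.e. before a breach."""
    return lcr(s) < threshold + margin

def nsfr_gap(s: LiquiditySnapshot) -> float:
    """Gap = A - R; a negative gap should trigger an alert."""
    return s.asf - s.rsf

snap = LiquiditySnapshot(hqla=120.0, outflows_30d=150.0, inflows_30d=50.0,
                         asf=900.0, rsf=950.0)
print(lcr(snap))          # 1.2: compliant, but inside the warning band
print(lcr_warning(snap))  # True: 1.2 < 1.25 = theta + delta
print(nsfr_gap(snap))     # -50.0: funding gap, alert
```

Note that the warning fires while the bank is still compliant (LCR = 1.2 ≥ 1), which is exactly the early-warning behaviour the derivation calls for.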
2.1. Key Considerations for Compliance and Operational Efficiency

Banks execute cross-border transactions through networks of correspondent banking relationships [9]. To minimize transaction costs while complying with regulations in the covered jurisdictions, banks deploy transaction-monitoring systems to identify potentially suspicious transactions for further investigation, decision making, appropriate action and, if warranted, reporting to regulators.

In some situations, however, active regulatory monitoring of transactions, not only for compliance but also for risk management, operational efficiency, and bank governance purposes, may justify transaction monitoring in sub-second timeframes [10]. Such use cases typically arise in response to bursts of transaction activity associated with a significant geopolitical or economic event or crisis, and/or transactions involving parties or jurisdictions with heightened risk. In addition to the fields associated with live transactions, such as previous transaction history, recent news coverage, bank governance and rating information, and risk and exposure levels, the volume of transaction data in banks' systems can be further enlarged by supporting event-driven data sampling and other essentially temporal queries on streaming data sources.

3. Fundamentals of Event-Driven Architectures

Event-driven architectures (EDAs) provide a conceptual and technological framework for building real-time systems. EDA refers to a set of patterns and technologies that enable the generation, transmission, reception, storage, and processing of events—data changes—and their related notifications [11]. An event-driven application is built with event-driven concepts and technologies at its core, with events playing a central role in system interactions and integrations. EDA allows for real-time applications that are reactive, scalable, and decoupled.

At the core of event-driven systems are events—state changes of interest in a business context. A formal definition of an event is the occurrence of a change or action that captures a complete and consistent transition of data between two states—the state before the change and the state after the change. The complete transition encapsulates all changes that occurred during the action but that may not have been reported. Because multiple changes can occur in parallel, an event may encapsulate multiple changes affecting the same entity. Events can be classified as actions (responses) or notifications (alerts) and as data related to a specific action/notification or data that is reported per se. Events can also be expressed by means of contracts that define their structure and semantics and at which points in time they are produced and consumed [12].

3.1. Core Concepts and Patterns

Central to the effectiveness of event-driven architectures are the core concepts of event, event stream, and event channel, along with numerous well-established design patterns that facilitate their proper use. An event is an immutable record of a change of state that arrives in the form of a single message published by the entity in charge of executing the change [13]. Events can occur sporadically in unpredictable volumes. An event stream collects events of the same type emitted over time. An event channel is a logical grouping of semantically related events, dedicated to a specific API, that adheres to an agreed data model or schema. Event notifications and commands are primary subcategories of events, respectively indicating a state that is stable, such as completion of a transaction or an authorized store visit, and a state that is neither stable nor obvious, such as the execution of a purchase with changes to the balances of two accounts. Commands are only executed once.

Multiple standard enterprise integration patterns—source, target, message-driven beans, publish–subscribe, fan-out, event accumulator, and event distributor and router—play a crucial role in enabling effective event-driven architectures. Real-time monitoring systems for regulatory compliance in cross-border banking operations clearly illustrate the benefits of using a native event-driven style and the interplay of testing and observability [14].

3.2. Messaging Protocols and Data Contracts

In event-driven architecture, communication between different components relies on messages conveyed using messaging protocols [15]. The choice of protocol depends on several aspects, including whether subscribers are interested in a subset of all events, whether they require all events that match the filter, whether their state needs to be preserved, and the required delivery guarantees. Protocols can be grouped into request-reply protocols (also called point-to-point protocols) and publish-subscribe protocols. In request-reply protocols, one sender sends a message to a receiver, and this receiver sends a reply back to the sender. In publish-subscribe protocols, the publisher sends a message to a topic, and then the broker delivers it to all subscribers of this topic. Furthermore, in push protocols, the broker forwards each message to the receivers as soon as possible, while in pull protocols, the receivers fetch messages from the broker whenever they are ready.
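To make the request-reply versus publish-subscribe distinction concrete, a minimal in-memory publish-subscribe sketch with push delivery might look as follows; the `Broker` class and topic names are illustrative assumptions, not a real messaging library:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal in-memory publish-subscribe broker using push delivery:
    every message on a topic is forwarded to all of its subscribers."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Push protocol: forward to every subscriber as soon as it arrives.
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
received: list[dict] = []
broker.subscribe("payments.booked", received.append)                     # monitoring
broker.subscribe("payments.booked", lambda m: received.append({"audit": m["id"]}))  # audit trail
broker.publish("payments.booked", {"id": "tx-1", "amount": 250.0})
print(len(received))  # 2: both subscribers received the one published message
```

In a request-reply protocol, by contrast, the publisher would address exactly one receiver and wait for that receiver's reply.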

For data privacy and confidentiality purposes, no subscriber wants to receive messages that contain data of another subscriber. In other words, subscribers can see messages on the same topic only if they have the same data access control rights. An access control mechanism should be used to ensure this property [16]. In some cases, a subscriber wants to see only some of the events generated on a topic. Distribution lists can be used to provide this filtering mechanism. Moreover, subscribers can have their states preserved by the broker, so when they reconnect after a failure, they can pick up where they left off. Events that arrived during the downtime are replayed to them. This feature is very important for reliable systems. A subscriber also can set up some events to receive guaranteed delivery of messages. In this case, when a message arrives at the broker, it is stored in a persistent storage until some time passes after the delivery to all receivers, in order to recover its state.
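The durable-subscription behaviour described above can be sketched as a broker that keeps a message log plus a per-subscriber offset, so that a reconnecting subscriber replays everything it missed during downtime. The `DurableTopic` class and subscriber identifiers below are illustrative assumptions, not any particular broker's API:

```python
class DurableTopic:
    """Sketch of a durable subscription: messages are kept in a log and each
    subscriber tracks an offset, so events that arrive while a subscriber is
    down are replayed to it on reconnection (pull-style delivery)."""
    def __init__(self) -> None:
        self._log: list[dict] = []        # persisted message log
        self._offsets: dict[str, int] = {}  # subscriber id -> next index to read

    def register(self, sub_id: str) -> None:
        self._offsets.setdefault(sub_id, 0)

    def publish(self, message: dict) -> None:
        self._log.append(message)

    def fetch(self, sub_id: str) -> list[dict]:
        """Pull protocol: return (and thereby replay) everything since the
        subscriber's last read, then advance its offset."""
        start = self._offsets[sub_id]
        self._offsets[sub_id] = len(self._log)
        return self._log[start:]

topic = DurableTopic()
topic.register("monitor-1")
topic.publish({"event": "transfer", "amount": 10})
topic.publish({"event": "transfer", "amount": 20})  # arrives while monitor-1 is down
print(len(topic.fetch("monitor-1")))  # 2: both events replayed on reconnect
print(topic.fetch("monitor-1"))       # []: offset advanced, nothing new to read
```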

4. Architectural Styles for Real-Time Monitoring

Near-real-time monitoring is primarily concerned with the ability to support decision making within a tolerable time range so decisions can be based on, for example, close to real-time data or supporting analytics. Thus, decision latency varies according to the size of the organization, its transactions, and volumes [17].

Two basic scenarios are identified: streaming-based event monitoring (or quasi-real-time) for situations with very high traffic, and event-driven monitoring flows for applications where speed and volume are less critical. The first scenario exploits the capabilities of stream processing, based on windowing algorithms that aggregate data in time-spaced batches. Current architectures support high-volume near-real-time decision making, capable of detecting critical events such as an excess of money transfers within a short time interval. However, such applications are less frequently implemented because of the investment needed and the lack of clarity in specifying detection rules and in taking action on alerts. The second scenario follows the traditional EDA core: applications put events on a bus, and other components subscribe to them. Logic is applied to events using orchestration tools, closing a data-driven workflow. This monitoring style is commonly categorized as a near-real-time decision mechanism [18].
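The first scenario's windowed detection of an excess of money transfers within a short interval can be sketched as a sliding-window check; the window length and transfer limit below are hypothetical thresholds chosen for illustration:

```python
from collections import deque

class TransferBurstDetector:
    """Sliding-window monitor: alert when more than `max_transfers`
    occur within the last `window_seconds` (hypothetical thresholds)."""
    def __init__(self, window_seconds: float, max_transfers: int) -> None:
        self.window = window_seconds
        self.limit = max_transfers
        self.times: deque[float] = deque()

    def observe(self, timestamp: float) -> bool:
        """Record one transfer; return True if the burst threshold is exceeded."""
        self.times.append(timestamp)
        # Evict transfers that have fallen out of the window.
        while self.times and self.times[0] <= timestamp - self.window:
            self.times.popleft()
        return len(self.times) > self.limit

det = TransferBurstDetector(window_seconds=60, max_transfers=3)
alerts = [det.observe(t) for t in [0, 10, 20, 30, 120]]
print(alerts)  # [False, False, False, True, False]
```

The fourth transfer at t = 30 is the fourth event inside a 60-second window and trips the alert; by t = 120 the earlier events have aged out and the detector is quiet again.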

Equation 2) Core event-driven equations

State-transition model of an event

  1. Let the system state immediately before the action be S− and the state immediately after be S+.
  2. Let a be the action (transaction, booking, transfer, etc.).
  3. State transition function: S+ = f(S−, a)
  4. An event can be represented as a record of that transition: e = (id, t, a, S−, S+)
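The transition record e = (id, t, a, S−, S+) maps naturally onto an immutable data structure. The following sketch is illustrative: the field names and the `apply_transfer` helper are assumptions made for the example, not part of any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # events are immutable records of a completed transition
class Event:
    """e = (id, t, a, S_before, S_after) from the state-transition model."""
    event_id: str
    timestamp: float
    action: str
    state_before: dict
    state_after: dict

def apply_transfer(state: dict, amount: float) -> Event:
    """S+ = f(S-, a): debit an account and capture the full transition."""
    new_state = {**state, "balance": state["balance"] - amount}
    return Event(event_id="evt-1", timestamp=1000.0,
                 action=f"transfer:{amount}",
                 state_before=state, state_after=new_state)

e = apply_transfer({"balance": 500.0}, 120.0)
print(e.state_before["balance"], e.state_after["balance"])  # 500.0 380.0
```

Carrying both S− and S+ in the event is what lets a downstream monitor verify the transition itself, rather than just observing the new state.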
4.1. Streaming-Based Monitoring Pipelines

Technological change is often said to happen at an exponential rate, and a complementary phenomenon can be observed in regulation: the regulatory landscape for financial institutions operating internationally is being fundamentally redesigned. Whereas in the past comprehensive compliance solutions had to be built only once a decade, regulators are streamlining and shortening the cycles for carefully identified portions of the rulebook [19]. They then request data from market players for a specific period, often related to an emerging risk or because an entity was under observation due to irregular activities. Regulatory sandboxes, or even establishment periods for players that may only now be entering a market, are an excellent opportunity to run a business effectively.

The proposal at the level of real-time monitoring not only supports a company’s compliance department during cyclical, multi-annual and often monotonous activities but also aims to establish a living platform for continuous architecture updates [20]. With the base for monitoring in place, the platform can be expanded to future regulatory requests through streaming-based data joins, because messaging contracts have already been established. Those contracts govern the transmission structure, essential for massive data exchange between companies and authorities, and add extra quality controls so that legitimacy is based on a fit-for-purpose process with complete data credibility and chain of custody.

4.2. Event-Driven Data Flows and Orchestration

Event-driven orchestration allows reactive or triggered reactions in configurations usually referred to as service-level business processes or workflows. In their most basic form, these orchestrations react to individual events with little or no contextual information. However, more advanced use cases can combine orchestration with streaming-style scenarios [21]. For example, application-specific context or topology information could be merged on the fly with transaction data and other events to define sensitive-transaction notification thresholds.
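Such an on-the-fly merge of context with transaction events might be sketched as follows; the risk tiers, customer identifiers, and thresholds are hypothetical illustrations rather than real reference data:

```python
# Hypothetical reference context: per-customer risk tier and the transaction
# amount above which a sensitive-transaction notification should fire.
RISK_CONTEXT = {
    "cust-7": {"tier": "high", "threshold": 1_000.0},
    "cust-9": {"tier": "low", "threshold": 50_000.0},
}

def enrich_and_flag(txn: dict) -> dict:
    """Merge the transaction event with its context and apply the
    context-dependent sensitivity threshold."""
    ctx = RISK_CONTEXT.get(txn["customer"],
                           {"tier": "unknown", "threshold": 10_000.0})
    return {**txn,
            "risk_tier": ctx["tier"],
            "sensitive": txn["amount"] >= ctx["threshold"]}

print(enrich_and_flag({"customer": "cust-7", "amount": 2_500.0}))
# flagged as sensitive: a high-tier customer trips the lower threshold
print(enrich_and_flag({"customer": "cust-9", "amount": 2_500.0}))
# not flagged: the same amount is routine for a low-tier customer
```

The point of the sketch is that the threshold is not a global constant: the orchestration layer derives it from context joined to the event stream.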

Such integrations are helpful for fraud-detection scenarios, where lower-priority triggers can be defined on the basis of statistical patterns, up to full machine-learning support. Streaming connectors may also ease integration into traditional event-driven process automation solutions, allowing separation of event-generating systems and the orchestration workload from event-consuming systems. Additionally, traditional business-process orchestration servers based on event-driven notification and reaction patterns, such as Cognifide’s ASSURE, can be used transparently as part of the overall architecture, and occasionally also combined with event-detection engines like Drools Flow for real-time monitoring of services built using service-oriented architecture [22].

The centralization of event generation around the first applications deploying real-time external reporting or internal controls creates an aggregation point for all strategic distributed-data governance requirements such as naming, master data management, security, and data privacy. Privacy and security support may affect both data sent outside the organization and data that is subsequently revised. Security integration has led some organizations to use centralized solutions for the whole group’s key management. At the same time, formal data-validation approaches can automatically identify and shield sensitive information before sending it outside the organization for specific applications such as fraud detection or money laundering [23].

5. Data Governance, Privacy, and Security in Real-Time Contexts

Real-time regulatory monitoring requires careful management of the data for monitoring, control, and alerting. The data needed for compliance can introduce privacy concerns or be subject to data governance requirements. To address these issues, monitoring and alerting components must be designed as a subsystem governed by rules and processes that are more applicable to adaptive systems than traditional information systems. Data governance rules and processes should be aligned with the regulatory monitoring subsystem under a continuous compliance mindset. Regulatory data management should be analogous to business-associated key risk indicators.

Decisions about the management of the data associated with the focus regulation are key [24]. Sensitive data can be anonymized or pseudonymized, or processed in a manner that enforces their protection. Pseudonymization, for instance, is an appropriate method for specific types of regulatory monitoring processes, such as anti-money laundering, where the link back to the data subject must remain recoverable for investigation purposes by the authority that manages the prevention of illegal activity. Such decisions require a clear definition of the task to be executed on the monitored data streams: whether they deal with monitoring, alerting, or record-keeping for later audits, regulatory reporting, and forensic investigations.
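One common way to implement pseudonymization with a recoverable link is a keyed, deterministic token combined with a link table held only by the investigating authority. The sketch below uses an HMAC for this; the key and customer identifiers are hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; in practice held by the authority

def pseudonymize(customer_id: str) -> str:
    """Keyed, deterministic pseudonym: the same customer always maps to the
    same token, so behavioural patterns remain detectable in the monitored
    stream, while only the key holder can re-link tokens to identities."""
    digest = hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Link table maintained only by the investigating authority.
link_table: dict[str, str] = {}
for cid in ["cust-7", "cust-9", "cust-7"]:
    link_table[pseudonymize(cid)] = cid

print(pseudonymize("cust-7") == pseudonymize("cust-7"))  # True: deterministic
print(len(link_table))  # 2 distinct customers behind 3 observed events
```

Determinism is the key design choice here: unlike anonymization, it preserves linkability across events for pattern detection, while the HMAC key confines re-identification to the authority.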

5.1. Ensuring Data Integrity and Compliance in Real-Time Systems

The regulatory landscape surrounding data confidentiality and integrity is complex. Organizations collect, use, store, and transmit different types of data—customer, geographical, financial, personnel, transaction, etc.—belonging to multiple jurisdictions worldwide. The data are subject to conflicting requirements with respect to protection, retention, localization, and disclosure, the breach of which can expose organizations to severe reputational and financial damage. Breaches can occur not only as a result of malicious activity but also due to failure to comply with the specific regulations and requirements governing data use [25].

Real-time regulatory and tax compliance monitoring provides organizations with the ability to detect policy breaches as they occur. Nevertheless, unintended or malicious policy breaches remain a risk, and regulators routinely examine organizations’ systems and event flows for signs of such breaches. These external hygiene controls necessitate not only a data catalogue but also a data lineage repository that is capable of producing data quality reports. Maintaining such a repository in real time is expensive, as is responding to external regulators. Natural language processing (NLP) techniques can support the monitoring of data flows against data-related regulatory requirements, and organizations can periodically audit their real-time monitoring systems against NLP-generated rules and data quality reports using event-driven testing architectures.

6. Observability, Testing, and Reliability

Architectures for real-time regulatory monitoring not only must comply with stringent requirements but also must be safe and reliable to operate in porous transborder environments [26]. Consequently, a successful implementation would enable new monitoring capabilities in large financial organizations, enabling the use of innovative technologies while embodying principles of good governance, avoiding opaque black-box solutions, providing common sets of data privacy controls for all actors, and adding incident control of transgressive behaviour. These characteristics encompass the most significant aspects of event-driven systems and lead to the following considerations.

Event-driven architectures demand continuous observability in production to ensure compliance with business needs and the correct functioning of quality services [27]. However, the process of validating whether these conditions are being met is partly manual: dedicated teams need to check the observability layer continuously in order either to alert on violations or to execute more in-depth investigations. Shifting this activity left, in a way that makes violations of observability conditions automatically detectable during testing, would represent a major step towards true quality assurance during technical development.

Equation 3) Near-real-time monitoring equations (decision latency)

End-to-end decision latency

  1. Let t_e = timestamp when an event is produced
  2. Let t_a = timestamp when an alert/decision is available to act on
  3. Decision latency: L = t_a − t_e
  4. A “near-real-time” requirement is typically: L ≤ L_max

Streaming pipeline decomposition (useful for EDA observability), breaking the latency into components:

L = L_ingest + L_broker + L_process + L_store + L_notify
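The decomposition lends itself to a simple latency-budget check; the stage latencies and the L_max budget below are hypothetical numbers chosen for illustration:

```python
# Hypothetical per-stage latencies (seconds) for one alert path.
stages = {
    "ingest": 0.04,
    "broker": 0.01,
    "process": 0.15,
    "store": 0.05,
    "notify": 0.02,
}

# End-to-end decision latency: L = t_a - t_e = sum of the stage latencies.
L = sum(stages.values())
L_MAX = 0.5  # assumed near-real-time budget

print(round(L, 2))   # 0.27
print(L <= L_MAX)    # True: within the near-real-time budget
# The decomposition also shows which stage to optimize first:
print(max(stages, key=stages.get))  # 'process' dominates the path
```

Beyond a pass/fail check, tracking each term separately in the observability layer pinpoints which stage is eroding the budget as volumes grow.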
6.1. Emerging Trends and Implications for the Future

The need to analyze large volumes of data in fragmented, potentially discontinuous time windows has importance far beyond fraud, credit risk, and liquidity management. Interest in the event processing development area continues to grow rapidly [28]. In the past few years, a significant expansion of communication protocols such as MQTT and AMQP has increased interest in Event-Driven Architectures. These protocols support an architectural style that connects producers and consumers of data at a very fine-grained level while preserving loose coupling and decoupled composition across multiple granularity levels [29].

Emerging use cases for Event-Driven Architectures introduce new requirements for observability, testing, and reliability. The extreme fragmentation of system behavior into very fine-grained units creates a need to track how those units are called and how they interact. Existing testing strategies are less appropriate in an Event-Driven context, and the unreliability imposed by the supporting infrastructure introduces additional fault tolerance challenges [30]. The requirements of a streaming analytics processing system are often very different from those of traditional systems, yet the same system can evolve to cover both scenarios.

7. Conclusion

Observability, testing, and reliability are key considerations for software systems and services. These dimensions shape an organization’s ability to understand and reason about the behavior of systems and services regardless of their deployment architecture and style. The growing popularity of event-driven and streaming paradigms introduces new challenges and opportunities in these areas. While these paradigms enable interesting architectural styles and designs, they also make hypothesis-driven exploration and testing difficult. Techniques to ensure data quality, integrity, and compliance are emerging [31]. However, real-time monitoring, detection, and mitigation of privacy and security attacks remain major concerns.

Adopting event-driven architectures to meet real-time regulatory demands opens up new avenues of investigation. These approaches can be naturally articulated using the principles of event-driven architectures. Although the proposed considerations, topics of interest, and avenues for future work focus on the domain of regulatory monitoring, it is easy to imagine other domains for which these factors are equally relevant.

7.1. Future Directions and Considerations

Written and spoken languages are a leading medium for communicating human thought. Substantial research effort has gone into automating the task of understanding and generating textual or spoken language [32]. The intent in following the scientific route is to marry these two technologies—systems able to analyze languages combined with systems that generate spoken language.

Despite the promise, the merging of text analysis with speech synthesis and speech recognition has not yet produced a fully automated system capable of exploring and conversing spontaneously in a knowledge domain not previously encountered by the system [33]. Text and speech generation software is more versatile than either text or speech understanding software. However, the speech and text intelligibility of today's systems falls far short of human performance. Although research on speech recognition and understanding has produced considerable knowledge and advanced the state of the art, it has not yet overcome the many difficulties posed by spontaneous speech.

References

  1. Ackermann, F., Howick, S., Quigley, J., Walls, L., & Houghton, T. (2014). Systemic risk elicitation: Using causal maps to engage stakeholders and build a comprehensive view of risks. European Journal of Operational Research, 238(1), 290–299.[CrossRef]
  2. Dwaraka Nath Kummari, Srinivasa Rao Challa, "Big Data and Machine Learning in Fraud Detection for Public Sector Financial Systems," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2020.91221[CrossRef]
  3. Aggarwal, C. C., & Wang, H. (2010). Managing and mining graph data. Springer.[CrossRef]
  4. Allen, F., Gu, X., & Kowalewski, O. (2018). Financial structure, economic growth and development. Journal of Banking & Finance, 89, 1–3.[CrossRef] [PubMed]
  5. Botlagunta, P. N., & Sheelam, G. K. (2020). Data-Driven Design and Validation Techniques in Advanced Chip Engineering. Global Research Development (GRD) ISSN: 2455-5703, 5(12), 243-260.
  6. Albanese, D., Lo Duca, M., & Visintainer, R. (2019). Data quality and data governance in financial crime analytics. Journal of Financial Regulation and Compliance, 27(4), 475–492.
  7. Baeza-Yates, R., & Ribeiro-Neto, B. (2011). Modern information retrieval (2nd ed.). Addison-Wesley.
  8. Basel Committee on Banking Supervision. (2017). Sound management of risks related to money laundering and financing of terrorism. Bank for International Settlements.
  9. Meda, R. (2020). Designing Self-Learning Agentic Systems for Dynamic Retail Supply Networks. Online Journal of Materials Science, 1(1), 1-20.[CrossRef]
  10. Bennett, K. P., & Campbell, C. (2000). Support vector machines: Hype or hallelujah? SIGKDD Explorations, 2(2), 1–13.[CrossRef]
  11. Inala, R. Designing Scalable Technology Architectures for Customer Data in Group Insurance and Investment Platforms.
  12. Böhme, R., & Moore, T. (2012). The economics of cybersecurity. Journal of Cybersecurity, 1(1), 3–7.
  13. Gottimukkala, V. R. R. (2020). Energy-Efficient Design Patterns for Large-Scale Banking Applications Deployed on AWS Cloud. power, 9(12).
  14. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.[CrossRef]
  15. Buckley, K., & Webster, S. (2016). Sanctions compliance in international banking. Journal of Financial Compliance, 1(1), 23–37.
  16. Segireddy, A. R. (2020). Cloud Migration Strategies for High-Volume Financial Messaging Systems.[CrossRef]
  17. Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165–1188.[CrossRef]
  18. Keerthi Amistapuram , "Energy-Efficient System Design for High-Volume Insurance Applications in Cloud-Native Environments," International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering (IJIREEICE), DOI 10.17148/IJIREEICE.2020.81209
  19. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms (3rd ed.). MIT Press.
  20. Varri, D. B. S. (2020). Automated Vulnerability Detection and Remediation Framework for Enterprise Databases. Available at SSRN 5774865.[CrossRef]
  21. Dal Pozzolo, A., Bontempi, G., & Snoeck, M. (2015). Adversarial drift detection. IEEE Computational Intelligence Magazine, 10(4), 36–45.
  22. Vadisetty, R., Polamarasetti, A., Guntupalli, R., Rongali, S. K., Raghunath, V., Jyothi, V. K., & Kudithipudi, K. (2020). Generative AI for Cloud Infrastructure Automation. International Journal of Artificial Intelligence, Data Science, and Machine Learning, 1(3), 15-20.[CrossRef]
  23. Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107–113.[CrossRef]
  24. Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Now Publishers.[CrossRef]
  25. Rongali, S. K. (2020). Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems. Current Research in Public Health, 1(1), 1-15.[CrossRef]
  26. Evans, D., & Over, M. (2019). Sanctions compliance and operational risk. Journal of Banking Regulation, 20(3), 243–257.
  27. Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 17(3), 37–54.
  28. Machine Learning Applications inRegulatory Compliance Monitoring forIndustrial Operations. (2020). Global Research Development(GRD) ISSN: 2455-5703, 5(12), 75-95. https://doi.org/10.70179/tqqm2y82[CrossRef]
  29. García-Molina, H., Ullman, J. D., & Widom, J. (2008). Database systems: The complete book (2nd ed.). Pearson.
  30. Meda, R. (2020). Data Engineering Architectures for Real-Time Quality Monitoring in Paint Production Lines. International Journal Of Engineering And Computer Science, 9(12).[CrossRef]
  31. Goldreich, O. (2009). Foundations of cryptography: Volume 2, basic applications. Cambridge University Press.
  32. Inala, R. (2020). Building Foundational Data Products for Financial Services: A MDM-Based Approach to Customer, and Product Data Integration. Universal Journal of Finance and Economics, 1(1), 1-18.[CrossRef]
  33. Groth, P., & Moreau, L. (2013). PROV-overview: An overview of the PROV family of documents. Future Generation Computer Systems, 29(1), 158–165.
Cite This Article

APA Style
Davuluri, P. S. L. N. (2020). Event-Driven Architectures for Real-Time Regulatory Monitoring in Global Banking. Universal Journal of Business and Management, 1(1), 1-14. https://doi.org/10.31586/ujbm.2020.1362
References

  1. Ackermann, F., Howick, S., Quigley, J., Walls, L., & Houghton, T. (2014). Systemic risk elicitation: Using causal maps to engage stakeholders and build a comprehensive view of risks. European Journal of Operational Research, 238(1), 290–299.[CrossRef]
  2. Kummari, D. N., & Challa, S. R. (2020). Big Data and Machine Learning in Fraud Detection for Public Sector Financial Systems. International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE). https://doi.org/10.17148/IJARCCE.2020.91221[CrossRef]
  3. Aggarwal, C. C., & Wang, H. (2010). Managing and mining graph data. Springer.[CrossRef]
  4. Allen, F., Gu, X., & Kowalewski, O. (2018). Financial structure, economic growth and development. Journal of Banking & Finance, 89, 1–3.[CrossRef] [PubMed]
  5. Botlagunta, P. N., & Sheelam, G. K. (2020). Data-Driven Design and Validation Techniques in Advanced Chip Engineering. Global Research Development (GRD), 5(12), 243–260.
  6. Albanese, D., Lo Duca, M., & Visintainer, R. (2019). Data quality and data governance in financial crime analytics. Journal of Financial Regulation and Compliance, 27(4), 475–492.
  7. Baeza-Yates, R., & Ribeiro-Neto, B. (2011). Modern information retrieval (2nd ed.). Addison-Wesley.
  8. Basel Committee on Banking Supervision. (2017). Sound management of risks related to money laundering and financing of terrorism. Bank for International Settlements.
  9. Meda, R. (2020). Designing Self-Learning Agentic Systems for Dynamic Retail Supply Networks. Online Journal of Materials Science, 1(1), 1-20.[CrossRef]
  10. Bennett, K. P., & Campbell, C. (2000). Support vector machines: Hype or hallelujah? SIGKDD Explorations, 2(2), 1–13.[CrossRef]
  11. Inala, R. Designing Scalable Technology Architectures for Customer Data in Group Insurance and Investment Platforms.
  12. Böhme, R., & Moore, T. (2012). The economics of cybersecurity. Journal of Cybersecurity, 1(1), 3–7.
  13. Gottimukkala, V. R. R. (2020). Energy-Efficient Design Patterns for Large-Scale Banking Applications Deployed on AWS Cloud. power, 9(12).
  14. Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.[CrossRef]
  15. Buckley, K., & Webster, S. (2016). Sanctions compliance in international banking. Journal of Financial Compliance, 1(1), 23–37.
  16. Segireddy, A. R. (2020). Cloud Migration Strategies for High-Volume Financial Messaging Systems.[CrossRef]
  17. Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165–1188.[CrossRef]
  18. Amistapuram, K. (2020). Energy-Efficient System Design for High-Volume Insurance Applications in Cloud-Native Environments. International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering (IJIREEICE). https://doi.org/10.17148/IJIREEICE.2020.81209
  19. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to algorithms (3rd ed.). MIT Press.
  20. Varri, D. B. S. (2020). Automated Vulnerability Detection and Remediation Framework for Enterprise Databases. Available at SSRN 5774865.[CrossRef]
  21. Dal Pozzolo, A., Bontempi, G., & Snoeck, M. (2015). Adversarial drift detection. IEEE Computational Intelligence Magazine, 10(4), 36–45.
  22. Vadisetty, R., Polamarasetti, A., Guntupalli, R., Rongali, S. K., Raghunath, V., Jyothi, V. K., & Kudithipudi, K. (2020). Generative AI for Cloud Infrastructure Automation. International Journal of Artificial Intelligence, Data Science, and Machine Learning, 1(3), 15-20.[CrossRef]
  23. Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107–113.[CrossRef]
  24. Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Now Publishers.[CrossRef]
  25. Rongali, S. K. (2020). Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems. Current Research in Public Health, 1(1), 1-15.[CrossRef]
  26. Evans, D., & Over, M. (2019). Sanctions compliance and operational risk. Journal of Banking Regulation, 20(3), 243–257.
  27. Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 17(3), 37–54.
  28. Machine Learning Applications in Regulatory Compliance Monitoring for Industrial Operations. (2020). Global Research Development (GRD), 5(12), 75–95. https://doi.org/10.70179/tqqm2y82[CrossRef]
  29. García-Molina, H., Ullman, J. D., & Widom, J. (2008). Database systems: The complete book (2nd ed.). Pearson.
  30. Meda, R. (2020). Data Engineering Architectures for Real-Time Quality Monitoring in Paint Production Lines. International Journal Of Engineering And Computer Science, 9(12).[CrossRef]
  31. Goldreich, O. (2009). Foundations of cryptography: Volume 2, basic applications. Cambridge University Press.
  32. Inala, R. (2020). Building Foundational Data Products for Financial Services: A MDM-Based Approach to Customer, and Product Data Integration. Universal Journal of Finance and Economics, 1(1), 1-18.[CrossRef]
  33. Groth, P., & Moreau, L. (2013). PROV-overview: An overview of the PROV family of documents. Future Generation Computer Systems, 29(1), 158–165.