$8000+ Mn, Intelligent Document Processing Market Size to Grow at … – GlobeNewswire
October 12, 2022 09:00 ET | Source: The Insight Partners
Pune, INDIA
New York, Oct. 12, 2022 (GLOBE NEWSWIRE) — According to The Insight Partners' latest research study, "Intelligent Document Processing Market Size, Share, Growth, Industry Trends and Forecast to 2028," the global intelligent document processing market size is expected to grow from USD 1,022.73 million in 2021 to USD 8,045.81 million by 2028, at an estimated CAGR of 34.6% from 2022 to 2028.
Download Sample PDF Brochure of Intelligent Document Processing Market Size – COVID-19 Impact and Global Analysis with Strategic Insights at: https://www.theinsightpartners.com/sample/TIPRE00028611/
Global Intelligent Document Processing Market: Competitive Landscape
ABBYY; IBM Corp; Kofax Inc.; Datamatics Global Services Limited; Appian; WorkFusion, Inc.; Parascript; Open Text Corporation; Hyland Software, Inc.; and Extract Systems are among the key intelligent document processing market players profiled during the intelligent document processing market study. Several other major companies were studied and analyzed during this research study to get a holistic view of the intelligent document processing market and its ecosystem.
Inquiry Before Purchase: https://www.theinsightpartners.com/inquiry/TIPRE00028611/
Companies in the intelligent document processing market offer a range of products for various types of document processing. These products streamline document and data flow, which supports swift decision-making. In the retail industry, for instance, they deliver measurable, consistent business value in processes such as customer correspondence and claims, sales order processing, and accounts payable. A document processing platform reads unstructured documents, redacts or extracts the information the customer needs, and routes the data to its final destination. Such document processing reduces time spent on manual handling, cuts the human error typically caused by manual data entry, and offers fast access to valuable discrete data that customers can compare, share, report on, and analyze, all of which contributes to intelligent document processing market growth.
The rising demand for extracting insights from unstructured data is catalyzing the growth of the intelligent document processing market. Awareness of the advantages of intelligent document processing solutions is highest among large enterprises, which generate enormous amounts of data daily and invest substantial sums in enhancing their business processes and optimizing efficiency. Demand for intelligent document processing solutions among large enterprises is therefore rising steadily, lifting vendors' sales and ultimately driving the market.
Speak to Research Expert – https://www.theinsightpartners.com/speak-to-analyst/TIPRE00028611
Most business data generated in recent years is unstructured, and end users face significant challenges in gathering meaningful insights from it. The data takes many forms, including documents, emails, spreadsheets, audio and video, presentations, images, and web searches. To extract maximum information from unstructured data, many end users and industries are opting for intelligent document processing solutions, which help them extract important insights.
In the intelligent document processing market, end-use industries such as BFSI, manufacturing, and government across different regions have contributed to the solutions' popularity. For instance, the strong presence of the manufacturing and BFSI sectors in China and India has generated substantial demand for robust, efficient document processing tools over the past few years. North America holds the largest share of the intelligent document processing market, as most companies in the region have already embraced digital transformation to compete effectively in the global market.
Furthermore, several other countries are increasing their adoption of digital transformation technologies to compete effectively in the global market and boost revenue growth. Demand for document processing solutions is also expected to grow during the forecast period, owing to manufacturing industries' need for automated operations in several emerging APAC economies, contributing to regional market growth.
Quickly Purchase Premium Copy of Global Intelligent Document Processing Market Growth Report (2022-2028) at: https://www.theinsightpartners.com/buy/TIPRE00028611/
Browse Adjoining Reports:
Document Imaging Market Forecast to 2028 – COVID-19 Impact and Global Analysis by Component (Hardware, Software); Deployment Type (Cloud, On-premises); End-user (Educational Institutions, Government, Enterprises, Law Firms, Others) and Geography
Document Reader Market Forecast to 2028 – COVID-19 Impact and Global Analysis By Product Type (Passport, E-Passport, Visa, IDs, Licenses, Others); Technology (NFC/RFID, Barcode, Contact SmartCard, Others); Application (Border Crossings, Airports, Train/ Bus Terminals, Banks, Travel agencies, Others) and Geography
Document Analysis Market to 2027 – Global Analysis and Forecasts by Solutions (Products and Services); Deployment Type (Cloud and On-premise); Organization Size (Large Enterprises and Small & Medium Enterprises (SMEs)); Industry Vertical (BFSI, Government, Healthcare, Retail, Manufacturing, and Others)
Document Management Software Market Forecast to 2028 – Covid-19 Impact and Global Analysis – by Component (Solutions, Services), Deployment Mode (On-Premise, Cloud-Based), Organization Type (SME and Large Enterprise), Application (Healthcare, BFSI, Government, Education, Retail, and others) and Geography
Document Drafting Platform Market Forecast to 2028 – COVID-19 Impact and Global Analysis By Deployment Type (Cloud, On Premise); End User (Individual, Enterprise) and Geography
Medical Document Management Systems Market Forecast to 2028 – Covid-19 Impact and Global Analysis – By Application (Image Management, Patient Medical Records Management, Admission and Registering Documents Management, Patient Billing Documents Management); Solution (Document Scanning Software, Document Management Software); Mode of Delivery (Cloud Based, Web Based, On – Premise Model); End User (Hospitals and Clinics, Insurance Providers, Nursing Homes, Other End Users) and Geography
Document Scanner Market Forecast to 2028 – Covid-19 Impact and Global Analysis – by Product Type (Sheetfed Scanners, Handheld, Flatbed); Enterprise Size (SMEs, Large Enterprises); Industry Vertical (BFSI, IT and Telecom, Healthcare, Education, Transportation and Logistics, Others) and Geography
Document Camera Market Forecast to 2028 – COVID-19 Impact and Global Analysis by Connection Type (Wired, Wireless); End-Use (Education, Corporate, Others) and Geography
Medical Record Management Market Forecast to 2028 – Covid-19 Impact and Global Analysis – by Component (Software, Services); Application (Patient Record Management, Admission and Registration Document Management, Patient Billing Document Management, Others); Deployment (Cloud, On-Premise); End User (Hospitals and Clinics, Nursing Homes, Healthcare Payers, Others) and Geography
About Us:
The Insight Partners is a one-stop industry research provider of actionable intelligence. We help our clients meet their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Devices, Technology, Media and Telecommunications, and Chemicals and Materials.
Contact Us:
If you have any queries about this report or if you would like further information, please contact us:
Contact Person: Sameer Joshi
E-mail: sales@theinsightpartners.com
Phone: +1-646-491-9876
Press Release: https://www.theinsightpartners.com/pr/intelligent-document-processing-market
Industry Research: https://www.biospace.com/employer/2309254/tip-knowledge-services-pvt-ltd-/
Everything You Need to Know About Version Control – Spiceworks News and Insights
Version control tracks the progress of code across development and iterations and also aids in managing changes during the lifecycle.
Version control is a system that tracks the progress of code across the software development lifecycle and its multiple iterations – which maintains a record of every change complete with authorship, timestamp, and other details – and also aids in managing change. This article details how version control in DevOps works, the best tools, and its various advantages.
The process of monitoring and managing changes to software code is known as version control, sometimes also referred to as revision control or source control. Version control systems are software tools that help development teams track changes to source code over time.
Version control systems enable software teams to operate more swiftly and intelligently as development environments grow. They are especially beneficial for DevOps teams because they speed up successful deployments and reduce development time.
Version control pinpoints trouble spots when developers and DevOps teams working concurrently produce incompatible changes, so team members can compare differences or quickly determine who committed the problematic code by examining the revision history. A software team can use a version control system to resolve such a problem before moving on with a project.
Software teams can understand the evolution of a solution by examining prior versions through code reviews. Every alteration to the code is recorded by version control software in a particular type of database. If an error is made, developers can go back in time and review prior iterations of the code to remedy the mistake while minimizing disturbance for all team members.
Collaboration among employees, retention of the multiple iterations of content created, and data backup are just a few of the issues any global organization may encounter. Developers must overcome each of these for a business to succeed, and that is where a version control system becomes necessary.
The first version control system was mainframe-based, and each programmer used a terminal to connect to the network. The first server-based, or centralized, version control systems that utilized a single, shared repository were introduced on UNIX systems; later, these systems were made accessible on MS-DOS and Windows.
Versions can be identified by labels or tags, and baselines can be used to mark approved or particularly important versions. A checked-out version can serve as a branching point for code off the main trunk for various teams or individuals. When versions are checked out and checked back in, the first version checked in wins.
Some systems may offer version merging, so that if other versions are checked out, one can still upload new modifications to the central repository. Branching is a distinct approach to version control in which development programs are duplicated for parallel lines of development, keeping the original intact while working on the branch, or making separate modifications to each.
Each copy is called a branch, and the original program from where it was derived is known as the trunk, the baseline, the mainline, or the master. Client-server architecture is the standard model for version control. Another technique is distributed version control, where all copies are kept in a codebase repository, and updates are made by sharing patches or modifications across peers. Version control allows teams to work together, accelerate development, settle issues, and organize code in one place.
Globally, version control systems comprise a substantial industry, poised to reach $716.1 million by 2023 (per MarketsAndMarkets research). In this market, 13 tools stand out. They are:
Configuration Management Version Control (CMVC) is software that carries out software version control, configuration management, and change management tasks. The system was client-server based, with servers for several Unix flavors and command-line and graphical clients for many platforms. It can track a file's history even after the file is renamed, because the filename on disk is a number and developers may change the filename recorded in the database independently. Its decentralized administration makes delegating authority possible.
Git is among the most powerful version control programs now on the market. Linus Torvalds, the creator of Linux, created this distributed version control system. Its memory footprint is minimal, and it can track changes to any file. Combined with its extensive feature set, this yields a full-featured version control system that can handle any project. Thanks to its simple workflow, it is employed by Google, Facebook, and Microsoft.
A version control system called Apache Subversion, which is free and open-source, enables programmers to manage both the most recent and previous iterations of crucial files. It can track modifications to source code, web pages, and documentation for large-scale projects. Subversion’s main features are workflow management, user access limits, and cheap local branching. Both commercial products and individual projects can be managed using Subversion, a centralized system with many powerful features. It is one of Apache’s many open-source solutions, like Apache Cassandra.
Azure DevOps Server, formerly Team Foundation Server (TFS), is a group of software development technologies that can be used in conjunction; you can utilize all Azure DevOps services or just the ones you need to improve your existing workflow. In addition to access controls and permissions, this source code management program includes bug tracking, build automation, change management, collaboration, continuous integration, and version control.
One of the first version control systems developed, CVS is a well-known tool for open-source and commercial developers. You can use it to check in and out the code you intend to work on. Teams can integrate their code modifications and add distinctive features to the project. CVS uses delta compression to effectively compress version differences and a client-server architecture to manage change data. In larger projects, it saves a lot of disk space.
Developers and businesses adore Mercurial for its search capabilities, backup system, data import and export, project tracking and management, and data migration tool. The free source control management program Mercurial supports all popular operating systems. It is a distributed versioning solution and can easily manage projects of any size. Through extensions, programmers can quickly expand the built-in functionality. For software engineers, source revisioning is made simpler by its user-friendly and intuitive interface.
Software development teams may collaborate and keep track of all code changes using GitHub. You can track code modifications, go back in time to correct mistakes, and collaborate with other team members. GitHub positions itself as the most reliable, secure, and scalable developer platform in the world, offering resources and services to help you build cutting-edge communities.
Private Git repositories are hosted by the managed version control system AWS CodeCommit. It smoothly integrates with other Amazon Web Services (AWS) products, and the code is hosted in secure AWS settings. Therefore, it’s a suitable fit for AWS’s current users. Access to various helpful plugins from AWS partners is also made available through AWS integration, aiding in program development. You don’t have to worry about maintaining or scaling your source control system when you use CodeCommit.
As a component of the Atlassian software family, Bitbucket can be connected with other Atlassian products like HipChat, Jira, and Bamboo. Some of Bitbucket’s key features are code branches, in-line comments and debate, and pull requests. The company’s data center, a local server, or the cloud can all be used for its deployment. With Bitbucket, you can freely connect with up to five people. This is advantageous because you can use the platform without spending any money.
RhodeCode is a platform for managing public repositories. RhodeCode offers a contemporary platform with unified security and tools for any version control system, in contrast to old-fashioned source code management systems or Git-only tools.
The platform is designed for behind-the-firewall enterprise systems that require high levels of security, sophisticated user management, and standard authentication. RhodeCode has a convenient installer, it may be used as a standalone hosted program on your server, and its Community Edition is unrestrictedly free.
CA Panvalet establishes and maintains a control library of source programs, centralizes the storage of the source, and offers quick access for maintenance, control, and protection against loss, theft, and other perils. Like Microsoft Visual SourceSafe for personal computers, Panvalet is a closed-source, proprietary system for controlling and versioning source code. Users check out files to edit and then check them back into the repository using a client-server architecture.
Helix Core, the version control system from Perforce Software Inc., offers a single source of truth for all development. It is a networked client-server revision control tool that supports several operating systems, including OS X, Windows, and Unix-like platforms, and it is primarily used in large-scale development setups. By tracking and managing changes to source code and other data, it streamlines the development of complicated products. Its Streams feature branches and merges your configuration changes.
GNU Bazaar (formerly Bazaar-NG) is a command-line utility from Canonical, the company that created Ubuntu, and it is a distributed and client-server revision control system. Numerous contemporary projects use it, including MySQL, Ubuntu, Debian, and the Linux Foundation. GNU Bazaar is truly cross-platform, running on every version of Linux, Windows, and OS X. High storage efficiency, offline mode support, and external plugin support are some of its finest qualities, and it supports a wide range of development workflows.
Using a version control system, one can obtain the following benefits:
Benefits of Version Control
It goes without saying that team members should work simultaneously, but even individuals working alone can profit from being able to focus on separate streams of change. By designating a branch in VCS tools, developers and DevOps engineers can keep several streams of work separate while still having the option to merge them back together to ensure that their changes don’t conflict.
Many software development teams use the branching strategy for every feature, every release, or both. Teams have various workflow options to select from when deciding how to use the branching and merging features in a VCS.
The development of any source code is continuous in the modern world. There are always more features to be added, more people to target, and more applications to create. When working on a software project, teams frequently maintain several clones of the main project to build new features, test them, and ensure they work before merging the new feature into the main project. Because several sections of the code can be developed concurrently, this saves time.
The team tasked with the project consistently generates new source codes and makes changes to the already existing code. These modifications are kept on file for future use and can be consulted if necessary to determine the true source of a given issue. If you have a record of the changes made in a particular code file, you and new contributors may find it easier to comprehend how a specific code section came to be. This is vital for working efficiently with historical code and allowing developers to predict future work with accuracy.
This refers to every modification made over time by numerous people. File addition, deletion, and content modification are all examples of changes. The ease with which various VCS programs handle file renaming and movement varies. This history should also include the author, the date, and written comments explaining the rationale behind each change.
The ability to go back to earlier iterations allows for the root cause study of faults, which is essential when fixing issues with software that is more than a few years old. Nearly everything can be regarded as an “earlier version” of the software if it is still being developed.
Since everything except pushing and pulling can be done without an internet connection in a distributed version control system, most development can be done on the go, away from home, or in an office. Contributors make changes to their local copy of the repository and can view its full running history on their own hard drives.
With more flexibility, the team can resolve bugs with a single change-set, increasing developers’ productivity. Developers can do routine development tasks quickly with a local copy. With a DVCS, developers can avoid waiting on a server to do everyday activities, which can impede delivery and be inconvenient.
Whenever a contributor clones a repository using a version control system, they are essentially making a backup of its most recent version, which is probably its most significant advantage. With numerous backups on various workstations, the data is protected against loss in the event of a server failure.
Unlike a centralized version control system, a distributed version control system does not rely on a single backup, increasing the reliability of development. And although it is a widespread misconception that numerous copies consume significant space, most development involves plain text files and most systems compress data, so the copies take up little room on your hard drive.
An open line of communication between coworkers and teams results from version control, because sharing code and being able to track past work creates transparency and consistency. It makes it possible for different team members to coordinate workflows more straightforwardly, and this better communication has further benefits.
Team members can operate more productively as a result of effective workflow coordination. They can more easily manage changes and work in harmony and rhythm. This presents the many team members as a single entity that collaborates to achieve a particular objective.
Management can get a thorough picture of how the project is doing thanks to version control. They know who is responsible for the modifications, what the modifications are intended to accomplish, when they will be completed, and how the changes will affect the document's long-term objective. It also helps management spot persistent issues that particular team members may be causing.
The accurate change tracking provided by version control is a great way to get your records, files, datasets, and/or documents ready for compliance. To manage risk successfully, keeping a complete audit trail is essential. Regulatory compliance must permeate every aspect of a project. It requires identifying team members who had access to the database and accepting accountability for any changes.
The seamless progress of the project is ensured by version management. Teams can collaborate to simplify complex processes, enabling increased automation and consistency and the progressive rollout of updated versions of those processes. Updated versions also allow programmers to revert to a previous version when errors are found. Testing is simpler when you can go back to an earlier version, because bugs are caught sooner and with less user impact.
Version management prevents many outdated versions of the same document from circulating, reducing errors caused by information displayed inconsistently across different documents. Final versions of documents should be converted to a read-only state once evaluation is complete; this restricts further modification and leaves little room for future mistakes.
Version control systems are a vital component of modern-day software development. They help maintain a reliable source code repository and ensure accountability no matter who works on the code. They also make finding and addressing bottlenecks easier by simplifying root cause analysis. Ultimately, version control provides a single pane of glass for collaborative, iterative application development in short release cycles.
Best practices for construction document management – Planning, BIM & Construction Today
Many contractors use a mix of paper and digital documents, and even those who have gone fully digital may rely on several different software applications. Many also still manage their projects with basic digital tools like spreadsheets.
On the whole, this trend toward digital is good. It means contractors want to make documents more accessible to their teams. But for real success with document control in construction to happen, we need to identify what that work encompasses and focus on solving the remaining challenges of document storage and accessibility.
Construction document management is the general process that a construction manager or project manager might use to organise and store contracts, blueprints, permits, and other documents necessary to day-to-day operations.
These days, filing cabinets have been replaced in many businesses with construction software that digitises data and helps to store and share it quickly with those who need it in the field via electronic forms, dashboards, plans, drawings, specs and much more.
Construction document management styles may differ from individual to individual, but the best practices we’ve laid out below can help construction managers identify the best ways to ensure no one is left hanging when they need information.
Document management is not a trivial thing: these documents play a fundamental role in construction. Yet many common challenges related to document management arise, even in modern construction organisations.
The first step to getting a handle on document management at your organisation is to centralise your data and documents. A connected, cloud-based software solution that provides access to the most current project documents in real-time makes it easier for all members of your project teams to find what they need and execute a project correctly.
At Trimble Viewpoint, we often discuss the importance of having one accurate data source, and document management is no exception. How can you confidently say things will be done correctly unless everyone uses the same information?
Organising a tricky file structure can be made easier by swapping out paper records for digital files. In particular, those that can be updated through your construction software system in real-time to present the latest information are easier for multiple stakeholders to access.
Construction document management software that connects all of the necessary components of a project is crucial for successful teams.
Next, you must make documentation readily available to everyone on your project team who needs it. Cloud-based document storage and connected construction workflows allow your team to access needed data and documents in the field, often through mobile-friendly applications that work directly on smartphones or tablet devices.
Viewpoint For Projects, for example, can be accessed both by using a computer in the office, or a tablet or smartphone out in the field, providing the same degree of functionality no matter where work takes construction professionals.
This allows for real-time sharing and viewing of important documents. It has a customisable folder structure, so it’s simple to navigate and includes a complete version history and audit trail to see who’s made changes to documents. Viewpoint For Projects users can also mark up PDFs in their browser, so it’s easy to leave notes and get questions answered.
After you have centralised your data with a connected software suite and provided remote access to those who need that information most, it’s time to coordinate how information moves from one team member to another — and to standardise these workflows for all construction project data.
Ensure that the right assets and processes are in place to support these workflows.
When too many software systems don't integrate, document management workflows become difficult to coordinate and optimise. Ultimately, this can cause the same efficiency problems you were trying to solve in the first place.
However, with a connected, cloud-based construction management suite, most of the aforementioned workflows are built into the data and documentation capabilities. For instance, financial data entered into accounting workflows can auto-populate project management forms or reports and vice versa.
A centralised, reliable document management solution ultimately enables better collaboration for everyone working on your projects and gives you more control over documentation. You won’t have to worry about inaccurate data floating around and leading to mistakes on your job sites.
Once you connect your data and document workflows in real-time, you’ll be surprised just how much easier your daily tasks are to complete, how much smoother your projects go, and how much more profitable your business is.
API Evolution Without Versioning with Brandon Byars – InfoQ.com
Everyone likes the idea of building something new. So much freedom. But what about making changes after you have users? In this episode, Thomas Betts talks with Brandon Byars about how you can evolve your API without versioning, a topic he spoke about at QCon San Francisco.
Jan 09, 2023
Transcript
Thomas Betts: Everyone likes the idea of building something new, so much freedom. But what about making changes after you have users? Today I'm talking with Brandon Byars about how you can evolve your API without versioning, the topic he spoke about at QCon San Francisco. Brandon is a passionate technologist, consultant, author, speaker, and open source maintainer. As head of technology for Thoughtworks North America, Brandon is part of the group that puts together the Thoughtworks Technology Radar, a biannual opinionated perspective on technology trends. He is the creator of Mountebank, a widely used service virtualization tool, and wrote a related book on testing microservices. Brandon, welcome to the InfoQ podcast.
Brandon Byars: Oh thanks. Happy to be here.
Thomas Betts: I set this up a little in the intro. Let's imagine we have a successful API and it's in use by many people and other services calling it, but now it's time to make a change. In general, if we're adding completely new features, that's easy, but when we need to change something that's already being used, that's when we run into trouble. Why is that so difficult and who's impacted by those changes?
Brandon Byars: Yes, it's a really hard problem, and the pain of absorbing the change is often overlooked. So let's start with that second question first. For the API consumers, when you see a new major version, regardless of how that's represented, as an API version or SemVer or some equivalent, that's indicative of breaking changes, because the API producer wanted to either fix something or change the contract in a breaking way. That is work for you to consume. And that work is oftentimes easy to overlook because it's federated amongst the entire population of API consumers. And a lot of times you don't even have a direct connection with them for a public API like Mountebank, which is a public command line tool and a hybrid REST API with some interesting nuance behind it.
The standard strategy that you always hear about is versioning, and of course versioning works. You can communicate to the consumers that they need to change their code to consume the breaking changes in the contract. But that is work, that is friction. And what I tried to do very intentionally with Mountebank, which is open source, so I had a bit more room to play, it's just a volunteer project, was to come up with strategies outside of versioning that make that adoption easier, so that you're not frustrated with changes over time. And Mountebank itself is nine years old. It itself depends on APIs. It's a Node.js project, so it depends on Node.js libraries.
And I've spent more volunteer nights and weekends time than I care to admit not adding features, simply keeping up with changes to some of the library APIs that had breaking changes because they legitimately cleaned up their interface but they cleaned up the interface at the cost of me doing additional work and that adds up over time. And so I really pushed hard to come up with other strategies that still allow me to improve the interface over time or evolve it in ways that would typically be a breaking change, but without forcing the consumers to bear through the work associated with that breaking change.
Thomas Betts: And I like how you mentioned in that case, were a consumer that's also a producer. A lot of us, software developers straddle both lines. We're creating something that someone else consumes and sometimes that's a customer facing product, it's a UI, but sometimes it is an API that's a product, which is more like what you're describing with Mountebank.
Brandon Byars: Yes, of course, API is a broad term, application programming interface. So I mentioned Node.js libraries; those are in-process, and the JavaScript function definition, for example, might be the interface. Mountebank has a REST API, but it also has embedded programmable logic inside of it that is similar to what you might expect of a JavaScript function interface, because you can pass JavaScript functions into it as well. So it works on a couple of different levels. But you're absolutely right, it is an API, it's a product released publicly. I don't have a direct line of communication to each of the individual users of it. I do have a support channel, but I would prefer, for my own sanity, that they don't use the support channel for just simple upgrade questions. I would prefer to take that work off of both them and me in terms of the hand holding around it.
Thomas Betts: And so what exactly is Mountebank and then why was it a good system that allowed you to explore these ways of how to evolve an API?
Brandon Byars: Mountebank is what's called a service virtualization tool. That phrase I stumbled across after writing Mountebank; I hadn't come across it previously, and I considered it an out-of-process stub. So if you're familiar with JMock, or one of those mocking tools that does in-process stubbing, this allows you to take that out of process. So if I want to have black box tests against my application, and my application has runtime dependencies on another service that another team maintains, perhaps, then anytime I run my tests against that service, I need an environment where both my application and the service are deployed. Especially if another team is controlling the release cycle of that dependency, that can introduce non-determinism into your testing.
And so service virtualization allows you, in testing your application, to directly control the responses from the dependent service. You can test the happy paths, and it's much easier to test exceptional scenarios, once you understand how the real service should respond in those exceptional scenarios, to test the sad paths as well, allowing you a lot more flexibility in test data setup and test determinism.
And of course, it still needs to be balanced with other test approaches, like contract testing, to validate your environmental assumptions. But it allows you to have higher-level tests, integration or service or component tests, with the same type of determinism that we're used to in-process.
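For readers who haven't used a tool like this, here is a minimal sketch of what configuring a virtual service can look like. The imposter shape follows Mountebank's published REST API, but treat the details as illustrative rather than authoritative:

```javascript
// Sketch: stub a dependency with Mountebank so tests control its responses.
// Assumes mb is already running on its default port 2525 (run with Node 18+
// as an ES module); field names follow Mountebank's documented REST API.
const imposter = {
  port: 4545,              // the virtual service will listen here
  protocol: "http",
  stubs: [{
    predicates: [{ equals: { method: "GET", path: "/inventory/123" } }],
    responses: [{ is: { statusCode: 200, body: { sku: "123", inStock: false } } }]
  }]
};

// Register the imposter, then point the app under test at localhost:4545
// to exercise the "out of stock" sad path deterministically.
await fetch("http://localhost:2525/imposters", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(imposter)
});
```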
So why is it a good platform for exploring these concerns? Part of that is just social: it's a single-owner open source product. I manage it, so I have full autonomy to experiment. It is also because of the interesting hybrid nature of it that I mentioned previously, where it's both the command line interface that you start up, which listens on a socket and exposes a REST API, and of course it can spin up other sockets, because those need to be the virtual services that you're configuring.
And the programmable nature of it, where you can pass in certain JavaScript under certain conditions that try to cover off security concerns, allows for some really interesting evolutions of what you would normally represent in something like an OpenAPI specification. And I recognize an OpenAPI specification will never be rich enough to give you the full interface of the programmable interface that's embedded inside the REST interface. So it allowed me to explore a lot of nuance around what it means to provide an API specification, and I had the autonomy to do that. And it was a tool that, I was fortunate, had some pretty healthy adoption early on. So I was doing this in the face of real users, in the natural course of work with real users, not trying to do something artificial that was just a science experiment on the side.
Thomas Betts: So one of the things about APIs: we usually describe them as contracts, but I remember in your QCon talk, you said that the better word was promises. Can you explain the difference there?
Brandon Byars: Yes, and it's really just trying to set expectations with users the right way, and have a more nuanced conversation around what we mean by the interface of an API. So we talk about contracts and we have specifications, and of course, if you remember, we went through that awkward transition from SOAP to REST in the 2008-era time frame, and we really didn't have any specification language for REST. There was a lot of backlash against WSDL for SOAP. It was very verbose, and so we went for a few years without having some standard like what Swagger ultimately became.
So there was some room in that part of my career where we experimented without these contracts, but we obviously still had an interface, and we would document that maybe on wikis or whatever that might be, to try to give consumers an indication of how to use the API. We could only get so far with that; it still had flaws in it. And so we filled that hole appropriately with tools like Swagger and OpenAPI; there were other alternatives too. These allowed us to communicate with consumers in a way that made it easier to build SDKs, and that allowed generic tools like the graphical UI that you might see on a webpage describing the documentation, the Swagger docs. But it's never rich enough to really define the surface area of the API.
And that is particularly true when you have a complex API like Mountebank with an embedded programmable interface inside of it, because now you're talking about what is just a string on the JSON interface. But inside that string might be a function declaration that also has to have a specific interface for it to work inside the JavaScript context that it's executed inside of. And that's an example, but it's a more easily spotted example than what tends to happen even when you don't have a programmable interface, because you still have edge cases of your API that are always difficult to demonstrate through the contract.
And this idea of promises came out of the configuration management world. Mark Burgess, who helped create CFEngine, one of the early progenitors of Puppet and Chef and the modern infrastructure-as-code practices, defined a mathematical theory around promises that allowed him to build CFEngine. But it was really also a recognition that promises can be broken in the real world. When I promise you something, what I'm really signaling is that I'm going to make a best faith effort to fulfill that promise on your behalf. And that's a good lens to think about APIs, because under load, under exceptional circumstances, they will respond in ways that the producers could not always predict. And if we walk into it with this ironclad architectural mentality that the contract directly specifies what the API is and how it's going to behave, we're missing a lot of nuance. Thinking in promises allows us to have richer conversations around API evolution.
Thomas Betts: I want to go back to what you said about communication. In your talk, you covered the evolution patterns and different ways to evolve an API; you had criteria, and communication seemed to be the focal point of that. Architects love to discuss trade-offs. What are the important trade-offs and evaluation criteria that we need to consider when we're looking at these various evolution patterns?
Brandon Byars: There's an implicit one and I didn't talk about it much because it's the one that everybody's familiar with and that is implementation complexity. A lot of the times, we version APIs because we want to minimize implementation complexity and the new version, the V2, allows us to delete a bunch of now dead code so that we, as the maintainers of it, don't have to look at it.
What I tried to do was look at criteria from a consumer's perspective and the consumers don't care what the code inside your API looks like.
I listed three dimensions. The first one I called obviousness. A lot of times it goes by the name, in the industry, of the principle of least surprise. Does the API, the naming behind the fields, the nesting structure, and the endpoint layout match your intuitive sense of how an API should respond? Because that eases the adoption curve. That makes it much easier to embrace, and you always have the documentation as a backup. But if it does what you expect, because we as developers are tinkerers, we're experimenters, that's how we learn how to work through an API. Obviousness goes a long way towards helping us adopt it cleanly.
I listed a second one that I called elegance, which is really just a rough proxy for usability and the learning curve of the API: consistency of language, consistency of style, the surface area of the API. A simple way to avoid versioning, for example, is to leave Endpoint1 alone and add a separate Endpoint1V2; that allows you to not version. And it's a legitimate technique, but it decreases elegance, because now you have two endpoints that the consumer has to keep in mind, and they need some understanding of the evolution of the API over time, as an example.
And then the third one is stability, which is how much effort a consumer has to put into keeping up with changes to the API over time. And of course versioning is stable within a version, but it oftentimes requires effort to move between versions. The techniques that I talked about in the talk meet stability to varying degrees. Sometimes it can't be a perfect guarantee of stability, this is where the promise notion kicks in, but you can make a best faith effort of providing a stable upgrade path to consumers.
Thomas Betts: So that gets us to the meat of your talk was about these evolution patterns. I don't know if we'll get through all of them, but we'll step through as many as we can in our time. The first was change by addition, which the intro I said is considered the easy and safe thing to do. But can you give us an example and talk about the pros and cons of when you would or wouldn't want to change by addition?
Brandon Byars: Yes, the simplest example is just adding a new field, simply adding a new object structure into your API, and that should not be a breaking change for consumers. Of course, there are exceptions where it will be, if they have strict deserialization turned on and have configured their deserializer to throw errors if it sees a field it doesn't recognize. But in general, we have to abide by what's known as Postel's Law, which says that you should be strict in what you send out and liberal in what you accept. And that was a principle that helped scale the internet.
Postel was involved in a lot of the protocols, like TCP, that helped to scale the internet. And it's a good principle to think in terms of for API design as well, having a tolerant reader. A more controversial example might be the one I just gave: if we have Endpoint1 and I decide that I got something wrong about Endpoint1's behavior, but I don't want to create a new version, I just create Endpoint1V2 as a separate endpoint. And so that's a new change. It's a change by addition, but it's an inelegant one, because it means consumers now have to understand the nuance between these two endpoints. So it increases the surface area for what is fundamentally the same capability of the API.
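A minimal sketch of the tolerant-reader side of Postel's Law that Brandon describes, with hypothetical endpoint and field names:

```javascript
// Sketch of a tolerant reader: the consumer picks out only the fields it
// needs and ignores anything new the producer adds later. The URL and field
// names here are hypothetical, not from Mountebank.
const response = await fetch("https://api.example.com/products/42");
const body = await response.json();

// Destructuring named fields means a new `discount` or `tags` field added
// by the producer is silently ignored instead of breaking deserialization.
const { id, name, price } = body;
console.log(id, name, price);
```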
Thomas Betts: Yes, I can see that. GetProducts and GetProductsV2 and it returns a different type. And then what do you do with the results if you want to drill into it and that can quickly become a spaghetti pile of mess. The next one was multi-typing and what does that look like in an API?
Brandon Byars: Yes, so I did this one time in Mountebank and I regretted it, because I don't think it's a particularly obvious or elegant solution. I had added a field that allows you to specify some degree of latency in the response from the virtual service, just a number of milliseconds to wait. And then somebody asked to be able to make the number of milliseconds dynamic. I mentioned in passing this programmable embedded API inside the REST API; there was already a way of passing a JavaScript function in other contexts. So I decided that was a solution that sort of fit within the spirit of Mountebank. But because I didn't want the GetProducts and GetProductsV2 situation, I didn't want a wait behavior, which is what it's called, and a separate waitDynamic behavior. So I just overloaded the type of the wait behavior.
So if you pass a number, it interprets it as milliseconds. If you pass something that can't be interpreted as a number, it expects it to be a JavaScript function that will output the number of milliseconds to wait, and that works without having to add a new field. But it's a clumsy approach in retrospect, because it makes building a client SDK harder. That's an unexpected behavior of the API. So in retrospect, I would've gone with a less elegant solution that increased the surface area of the API, just to make it more obvious to consumers.
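A sketch of the multi-typing idea behind that wait behavior. The field shape mirrors the story above, but the helper name and its handling are simplified assumptions, not Mountebank's actual implementation:

```javascript
// Sketch of multi-typing: one field accepts either a number of milliseconds
// or the text of a JavaScript function that returns the milliseconds.
// The helper name is hypothetical; the real tool sandboxes this execution
// far more carefully.
function resolveWait(wait) {
  if (typeof wait === "number") {
    return wait;                  // e.g. "wait": 500
  }
  // Otherwise treat the string as a function, e.g.
  // "wait": "function () { return Math.random() * 1000; }"
  const fn = eval(`(${wait})`);   // parentheses make eval parse it as an expression
  return fn();
}

console.log(resolveWait(500));                                   // 500
console.log(resolveWait("function () { return 100 + 200; }"));   // 300
```

The cost Brandon names is visible here: a generated client SDK has no clean way to express "a number, or a string that must contain code," which is why he calls the approach clumsy in retrospect.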
Thomas Betts: The idea of having an overload makes sense when it's inside your code. I write C# mostly and I can overload a function with different parameters and specified defaults and that's intuitively easy to tell when it's inside your code. When you're getting to an API surface, that raises a level of complexity because of how we're communicating those changes, it's not as obvious. You don't necessarily know what language is going to be calling into your service and what they're able to do.
Brandon Byars: Yes, that's exactly right and that's why I mentioned it in passing because I did do it. That was one of the very first changes I made in Mountebank but regretted it and I don't think it's a robust strategy moving forward.
Thomas Betts: Yes, it's also a case of if you make all the decisions based on the best information you have at that point in time and the 500 milliseconds sounded like a good option but quickly ran into limitations. I think people can relate to that.
I know I've run into the next one myself and that's upcasting. So take a single string and oh, I actually want to handle an array of strings. How does that look in an API and do you have any advice on how to do that effectively?
Brandon Byars: Yes, upcasting is probably my favorite technique as an alternative to versioning. The idea, and the name, of upcasting is really taking something that looks like an old request and transforming it to the new interface that the code expects. And I did something very similar to what you just described. I had something that was a single string; it was this notion that I could shell out to a program that could augment the response that the virtual service returns. But I quickly realized that it needed to be an array, because people wanted to have a pipeline of middleware, which other tools supported, so they could have multiple programs in that list. And the way that I went about that in Mountebank was I changed the interface. So if you go to the published interface on the documentation site, it would list the array. That was the only thing that was documented.
Because this was the request processing pipeline, every request came through the same code path. So I was able to insert, at one spot in that code path for all requests, a check that said, "Do we need to do any upcasting?" And what it would do is go to that field and say, "Hey, is the type a string? If it is, then just wrap an array around that string." And so the rest of the code only had to care about the new interface. That reduced the implementation complexity: rather than having to scatter this logic all throughout the code, I was able to centralize it in one spot.
It also is really effective because you can nest upcasts. In fact, this happened in the example that we're talking about, where it went from a string to an array and then, without getting into too much detail, it actually needed to turn back into a string, but with an array at the outer level. And so I had to then have a second upcast that just said, "Hey, is this an array? Turn it back into a string. And is this outer thing an array or an object?" And make sure it's the right type, and go through the transformation to fix it if it's not.
But again, it's very simple and very deterministic, because all requests in the pipeline go through the same code path. It centralizes the logic, and as long as you execute the upcasts in chronological order of when you made those changes, what would otherwise be versions, then the output is deterministic, and you're accepting basically anybody who has any previous version of your API. It will still work. Even if it doesn't match what's documented as the published interface, if it matches what used to be documented, the code will transform it to the current contract.
And so that's a really powerful technique that balances those concerns that we talked about around obviousness, and elegance, and stability. It's a very stable approach. There still are edge cases where you can break a consumer, if they're then retrieving the representation of this resource that has had its contract transformed by the upcast, and that breaks some client code that they have. You can still imagine scenarios where that could happen, but it's quite stable and very elegant, because it requires no additional work for the consumer to consume it.
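A sketch of what such a centralized upcast module can look like, using the string-to-array change described above. Function and field names are illustrative; the real Mountebank code differs in detail:

```javascript
// Sketch of centralized upcasting: each sub-function rewrites one historical
// request shape into the current one, and they run in chronological order so
// any older request ends up matching today's published contract.

function upcastShellTransformToArray(request) {
  // The old interface accepted a single program as a string;
  // the published interface now documents an array.
  if (typeof request.shellTransform === "string") {
    request.shellTransform = [request.shellTransform];
  }
}

function upcastNextChange(request) {
  // ...the next would-be "major version" transformation goes here...
}

// The single call site in the request-processing pipeline.
function upcast(request) {
  upcastShellTransformToArray(request);
  upcastNextChange(request);
  return request;
}

// An old-style request still works, even though only the array form is documented:
console.log(upcast({ shellTransform: "augmentResponse.sh" }));
// => { shellTransform: [ 'augmentResponse.sh' ] }
```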
Thomas Betts: Yes, that's a key point that you're trying to get to is minimizing the impact to the consumers. So having a version pushes the cost to them for this breaking change. But here, you're saying it is a breaking change but you are accepting the cost as the producer of the API.
Brandon Byars: Yes, and what I like so much about upcasting is that accepting the cost is centralized and easy to manage. Whereas every consumer who used that field would've had to make that change with a new version, only I, as the producer, have to make this change with an upcast, and I can centralize it, and it's not a lot of change. And I have all of the context around why the change happened, because I'm the producer of the API, so I can manage it in probably a safer way than a lot of consumers could. I know where a lot of the minefields that you might step on are during the transformation process itself.
Thomas Betts: Yes, I like the idea of having these versions. You talk about the versioning increasing the surface area of the API. It's also a matter of increasing the surface area of the code that you're maintaining. And here, by implementing that one upcast, it's in one place and it's very clear as opposed to now I've got the two endpoints, I've got double the code to maintain and how do I support that going forward? You've almost effectively deprecated the old one by assuming all of the functionality in the new one automatically.
Brandon Byars: Yes, so it's a clean technique because what you document as your published interface or contract is exactly what you would've otherwise done with a new version. It represents the new interface and the transformation code itself is very easy to manage with an upcast in my experience, at least with the upcasts I've done to date. And even when it's a complicated transformation, well that same transformation you would be asking your consumers to do were you to release a new version.
Thomas Betts: And like you said, in this specific case you changed the published specification. So you said, "I accept an array," but if someone still sent you a single string, which no longer abides by your published contract, you're like, "Oh, that's still good." So there's no impact on them, but how do you resolve the discrepancy of, "Here's what I say works, but that's not all that I do"? It's like an undocumented feature.
Brandon Byars: That's where you run some risk, because those undocumented features can cause bugs; in fact, a subsequent example, which hopefully we'll get to, tripped over this. So you have to be thoughtful about that. You have to be careful. But it's part of the trade-offs. We talked about architectural trade-offs, and this is allowing us to have a clean interface that represents the contract we want without passing complexity to the consumers to migrate from one version to the next. So it reduces the friction of me changing the interface, because I have to worry less about the cost to the consumers, while maintaining the clean interface that I want, as long as I don't run into too much risk of these hidden transformations causing bugs.
And in the case that we just talked about, where it was simple type changes, I feel really confident that those don't cause bugs. The only bugs would be people round-tripping the request, getting the subsequent resource definition back into their code, and doing some additional transformations on the client side. So there are broader ecosystem bugs that could happen, but then it's the same cost that the consumer would've had to pay if I had released a new version. So it's not making their life any worse than a new version would.
Thomas Betts: And then you said that you just apply these in chronological order. So it's almost like a history. You have comments in there that say, "Hey, this was version zero, then version one, then version two," and you can see the history of: I had to do this, and then I had to do that. So is your code self-documenting, just for your own benefit of, "Oh yes, I remember that decision that I had to make, and this is how I solved it"?
Brandon Byars: Close. So I have a module called compatibility, and in just one spot in the request processing pipeline, I say Compatibility.Upcast() and pass in the request. That upcast function calls a bunch of sub-functions. Every one of those sub-functions represents effectively a version, a point in time. And so the first one might have been "change the string to an array" and the second one might have been "change this outer structure into an object," whatever it is. But each of those is named appropriately, and the transformation is obvious. And then the documentation, the comments, and so forth around the code give you the context. And I just have the advantage of being a single maintainer, the advantage and the disadvantage. There are other disadvantages of being a single maintainer, but the advantage is that I know all the history, so it's well contained and very easy to follow.
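As a sketch of that structure (again with assumed names rather than the actual Mountebank source), the compatibility module can be little more than an ordered list of those sub-functions applied at one spot in the pipeline:

```typescript
// compatibility.ts, reusing the hypothetical upcast functions sketched above.
import { upcastShellTransformToArray, upcastRepairOuterShape } from "./upcasts";

type Request = Record<string, any>;

// Each entry is effectively a version: a named, point-in-time contract change.
// Order matters: the oldest change runs first, so any historical request shape
// is walked forward, step by step, to the current contract.
const upcasts: Array<(request: Request) => void> = [
  upcastShellTransformToArray,
  upcastRepairOuterShape,
];

// The single call site described above, Compatibility.Upcast(request),
// corresponds to calling this function once in the request pipeline.
export function upcast(request: Request): void {
  for (const transform of upcasts) {
    transform(request);
  }
}
```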
Thomas Betts: So that's a lot of talk about upcasting. What's the opposite of that? Downcasting?
Brandon Byars: Yes, downcasting is a little bit harder to think through. This is taking something that looks like the new interface and making it look like the old interface. I had to do this at a couple of points in Mountebank's history, and the implementation logic for it is more complex. The reason I had to do this is because of that embedded programmable API that I mentioned. The REST API was the same; it just accepted the string. The string represented a JavaScript function that ran in a certain context. And over time, as often happens with functions when people keep adding features, it just took on more and more parameters. And some of those parameters really should have been deprecated. So it was starting to look inelegant.
The usual solution for this in the refactoring world is to introduce a parameter object: a single config parameter whose properties represent all the historical parameters that were passed to the function. So I did that. The challenge is that I needed the code to work for the consumers who passed in both the new interface and the old interface. The only thing that's documented is the new interface; it just takes a single parameter object. But what the code does on the downcast is secretly pass the second, third, fourth, and fifth parameters as well. And it secretly adds properties to the first parameter, which is now the parameter object, so that it has all of the properties of what was the previous first parameter as well. So for anybody who was passing the old interface, the code has been changed so that it will still pass what effectively looks like the same information, especially if you consider some duck typing on that first parameter, because it'll have more than it used to have.
And for people who are passing the new interface where they just have a single parameter object, everything works great. If they want to inspect the function definition, they can tap into those other parameters, but they have no need to. That's just there for backwards compatibility purposes.
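A minimal sketch of that downcast, with assumed names (illustrative TypeScript, not the real embedded API): the documented interface is a single config object, the old positional parameters are still passed secretly, and the old first parameter's properties are merged onto the config so duck typing keeps old functions working:

```typescript
type Logger = { debug: (message: string) => void };
type UserFunction = (...args: any[]) => unknown;

function callUserFunction(
  fn: UserFunction,
  request: Record<string, any>,
  state: Record<string, any>,
  logger: Logger
): unknown {
  // New documented interface: one parameter object.
  const config: Record<string, any> = { request, state, logger };

  // Downcast, part 1: give the config object all the properties the old first
  // parameter had, so functions written against the old interface still find
  // them via duck typing.
  Object.assign(config, request);

  // Downcast, part 2: secretly pass the historical second and third (and so
  // on) positional parameters as well; new-style functions simply ignore them.
  return fn(config, state, logger);
}
```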
So that code had to be sprinkled around. I couldn't centralize it. I could centralize the transformation to an extent, but I had to call the downcast everywhere it was needed. There wasn't a single point in the request processing pipeline where I could do that. So downcasting was a little bit harder to manage.
Thomas Betts: But that, again, is your problem that you, as the maintainer, have to absorb, versus your consumers having to figure out: do this here and do that there, and make all these little selective changes to their consumption of the API. You just accepted that cost on their behalf. Does it make it easier for them to understand, because you haven't added the complexity to the API surface?
Brandon Byars: Yes, and this one would've been awkward for certain API consumers to embrace, because it's not directly visible from the REST API contract; it's an embedded API inside the REST API contract. So I was particularly concerned about how to roll this change out in a way that was stable for consumers, didn't cause a lot of friction, and didn't have them scratching their heads and poring over documentation to understand the nature of the change. I wanted it to be as seamless as possible for them, while giving everybody who's adopting the API for the first time what is a much cleaner interface.
Thomas Betts: I wanted to go back to… You mentioned that hidden interfaces were another landmine to worry about. Is it just a matter of you didn't provide documentation but you still accept something? And is that from laziness, or is it actually an intentional choice to say, "I'm not going to document this part of the API"?
Brandon Byars: Yes, it's intentional. I certainly have examples of laziness too, so I'm not trying to dismiss that as an explanation, but in the cases that I wanted to call out, what happened was I got something wrong to begin with. And of course, when you get something wrong inside a code base, you just refactor it. But when you get something wrong in a way that is exposed to consumers you don't control, in a public API, it's harder to fix. And this is generally where versioning kicks in: it allows me a path to fix it, and then it's the consumer's problem to upgrade.
I had an example where, as I mentioned, one of the bits of functionality in Mountebank was shelling out to another program to transform something about the response from this virtual service. Originally, I had passed these parameters as command line arguments, and it turns out that I just was not clever enough to figure out how to quote them for shell escaping across all the different shells. The Windows ones are where a lot of the complexity kicks in; especially the older cmd.exe has a lot of complexity around shell quoting that isn't portable to the POSIX shell-based terminals.
So I got it wrong, and I spent probably a full day trying to fix it. And I remember asking myself at one point, "Why am I doing this? Just pass the arguments as environment variables, problem solved." And eventually, I did that: I changed this programmable interface to pass in environment variables instead of command line parameters, because I couldn't figure out how to pass in the command line arguments the right way with the right shell escaping of quotes. And to strike a balance between stability and the new interface that I wanted, I wrote it in such a way that Mountebank was still passing the command line arguments. If that didn't work because it broke something in a Windows shell around shell escaping, well, it never worked, so that's fine. And if it used to work for you, it should continue to work for you. Everybody else, the new adopters, just use the environment variables; they get a much more scalable solution.
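A sketch of that change in illustrative TypeScript (the environment variable names are assumptions): the JSON payloads ride along as environment variables, which sidesteps shell quoting entirely, while the legacy command line arguments can still be appended for backwards compatibility:

```typescript
import { execSync } from "child_process";

function shellOut(command: string, request: object, response: object): string {
  // Environment variables need no shell escaping, regardless of the shell.
  const env = {
    ...process.env,
    MB_REQUEST: JSON.stringify(request),
    MB_RESPONSE: JSON.stringify(response),
  };

  // The legacy interface could still append the JSON as command line
  // arguments here, which is where the truncation discussed below comes in.
  return execSync(command, { env }).toString();
}
```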
But this is where those hidden mines that we talked about can trip you up, because it turns out that escaping quotes on the shell wasn't the only problem. It turns out that shells also have a limit on how many characters you can pass to a command line program. And again, cmd.exe has the lowest limit. And so I ended up having to truncate the amount of information passed, in ways that actually could break previous consumers, just to get around that limitation.
And it was a really interesting exercise to go through, because I had to trade off what was the lesser of two evils. Should I cut a new version and force everybody to upgrade, when in fact I had no evidence that anybody was tripping over this bug? If I truncated the number of characters passed to the shell, I had no evidence that it would break anybody, and to this day I don't have any evidence that it did. So I ended up making the change where I hid the previous interface: it's not in the documentation, and it still passes the command line parameters but shortens them, in ways that could have broken somebody whose setup used to work in the past and no longer does.
And I left some notes in the release notes that get pushed out with every release of Mountebank. But instead of making a pure architectural guarantee of correctness, I put on my product manager hat and asked, "As a user, what's the lesser of two evils?" If someone runs into this bug, is the path to resolution pretty clear? Can I give them as much direction as possible in the documentation, the release notes, and so forth, if that's something they're running into? And can I do this in a way that hopefully impacts nobody, but if it does, affects as few people as possible? And it felt like that was a path with less friction than releasing a new version that would've affected everybody in the upgrade path. But that was a really different way for me of thinking about API evolution, because I had to think about it more like a product manager than an architect.
Versioning is generally treated as an architectural concern, but it's really part of the public interface that you release to users; it's also part of your product interface. And when you come at it with a product mentality and think about how to minimize friction, you have a more nuanced understanding of the trade-offs. And I certainly did in that case.
Thomas Betts: Yes, that gets to where I wanted to wrap up: talking about how developers and architects should think about API evolution, not just the programming problems that you have. And I like that last example. Actually, I wanted to go back to it, because you had a bug, and sometimes when you have a coding bug you're like, "Oh, I can solve this inside this function and no one will know anything about it." But sometimes you realize the bug is only a bug because of what's being passed in as input. And the fix is that you have to change the input, and that, in this case, changes the API. Tell me more about that product management thinking of saying, "Well, we haven't seen any evidence that our customers are using this, and we think it'll be a minimal impact and an acceptable impact for them."
Brandon Byars: There's a lot to that. This is where it's a judgment call; it always is, anytime you're managing a product. But if you never risk upsetting some users with some feature changes as a product manager, then your product is going to be stuck in stasis. So you know you have to evolve. But you also know that you want to introduce as little friction as possible, because in your quest for new users, you don't want to lose the users you already have. It's one of the difficult parts of product management. And so in this case, it felt like the way to walk that tightrope was to take into consideration a few facts. The feature under consideration had not been out in the wild for very long before the bug was reported, so it's not like it had seen wide adoption yet.
The first change, switching to environment variables, happened pretty quickly, so most people who had used it should have been using the new interface. And the problem was that some of those people were running into a bug because they passed large strings of text to the command line. They had no idea why this was breaking, because to them it's just an environment variable; they didn't understand that it was also being passed on the command line. That was more confusing to them than stripping that functionality out.
So I was breaking people using the new interface, and I had no evidence that there was any adoption of the old interface because it had this bug. And so it was a risk/reward trade-off that said, "Hey, this feels like the path of least friction for the most people that leads to the cleanest outcome, so let's go down that path." And I haven't regretted it in that instance, but it's certainly something that requires a lot of nuance.
Thomas Betts: I remember in your talk, you briefly mentioned that you were working on an article about this that'll be published soon. Is that still in the works?
Brandon Byars: That is still in the works. I had started an article last year and put it on ice, and this QCon talk that I gave, and this podcast, Thomas, are a good nudge, because I'm hoping over the winter break to get it over the line. If I can, then, I've had a first-pass review with Martin Fowler, who's posted some of my other work, and I'm hoping that we can get it on his blog in the new year.
Thomas Betts: All right. Well hopefully, people will be able to find that on… Is it martinfowler.com?
Brandon Byars: That's it. Yes. The one and only.
Thomas Betts: All right. Hopefully, that'll be coming out soon, early next year.
Brandon Byars: That's my hope. Yes.
Thomas Betts: Well, I want to thank you again, Brandon Byars, for joining me on another episode of the InfoQ podcast.
Brandon Byars: Thank you so much, Thomas, for having me.
Global eClinical solutions Market Report 2022 to 2028: Rising … – Business Wire
DUBLIN–(BUSINESS WIRE)–The “Global eClinical solutions Market Size, Share & Industry Trends Analysis Report By Delivery Mode, By Product, By Clinical Trials Phase (Phase III, Phase II, Phase IV and Phase I), By End User, By Regional Outlook and Forecast, 2022 – 2028” report has been added to ResearchAndMarkets.com’s offering.
The Global eClinical solutions Market size is expected to reach $20.1 billion by 2028, rising at a market growth of 13.6% CAGR during the forecast period.
With professional data services as well as the elluminate Clinical Data Cloud, eClinical Solutions assists life sciences enterprises all over the world in accelerating clinical development projects.
The elluminate platform and digital data services provide clients with self-service access to all of their data from a single, central location, as well as comprehensive analytics that aid in the faster and more informed decision-making process for businesses.
For the effective administration of clinical trial data, a variety of eClinical solutions is employed, including electronic data capture & clinical data management systems, randomization and trial supply management, clinical trial management systems, and others. These aid in the efficient integration and management of data produced during clinical studies.
It provides a suite of tools to efficiently organize, manage, track, and create insights for clinical research portfolios. It integrates contact management sites and teams, a calendar and monitoring system, and document management.
As a result, it creates approved clinical research outcomes as well as compliant submissions; stores and regulates data entry; authenticates the reliability and integrity of the data, and makes it possible to improve the patient experiences by accelerating drug development.
COVID-19 Impact Analysis
The market for eClinical solutions is expected to benefit from the COVID-19 pandemic. To expand hospital capacity for patients diagnosed with COVID-19, a significant number of clinics and hospitals around the world underwent restructuring. A potential backlog in non-essential procedures developed as a result of the sharp increase in COVID-19 cases. The lockdown caused delays in the production and delivery of critical medical supplies.
Market Growth Factors
Rising Operational Expenditures and Regulatory Needs
Customized or gene-based disease management is becoming more popular in the field of medical research and innovative drug treatments. In comparison to presently available alternative therapies or medications, government reimbursement organizations, commercial insurers, and payers frequently demand novel drugs that have a better therapeutic value and greater efficacy.
Additionally, by controlling the standard pricing of innovative pharmaceuticals, these payers are reducing manufacturing businesses’ profit margins. Companies in the eClinical solutions market are therefore concentrating on the development and marketing of software solutions that help complete clinical studies quickly and efficiently.
Rising Adoption of Software Solutions in Clinical Trials
Due to the growing amount of data produced throughout clinical development processes, there is an increased need for recording and analyzing clinical data, which has led to a growth in the use of eClinical solutions in clinical trials.
Furthermore, eClinical technologies improve site performance, clinical trial efficiency, and cost optimization by removing redundant data entry. It is also noted that the rapid uptake of eClinical solutions, such as RTSM, combined with effective trial drug supply management, is expected to motivate key companies to increase their investment in product innovations.
Market Restraining Factors
High Cost of Implementation
In order to efficiently manage clinical research data and information throughout the research lifecycle, researchers might use eClinical solutions. Many integrated eClinical solutions (like CTMS and CDMS) offer clinical researchers end-to-end solutions for all phases of clinical trial administration. These software solutions are quite costly and charged at a premium rate, with additional costs for technical support, cloud-based systems, and the installation and maintenance of eClinical solutions.
Scope of the Study
Market Segments Covered in the Report:
By Delivery Mode
By Product
By Clinical Trials Phase
By End User
By Geography
Key Market Players
List of Companies Profiled in the Report:
For more information about this report visit https://www.researchandmarkets.com/r/krwe8
ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./ CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
Quality Management System Software Market is poised to grow at a … – Digital Journal
Quality Management System Software (QMS software) is a type of software application designed to help organizations improve their overall quality performance. It enables organizations to track, manage, and report on quality-related activities and processes, such as customer complaints, corrective and preventive actions, product and process audits, document control, and quality training. QMS software typically includes features such as document management, corrective and preventive action tracking, and non-conformance tracking. It can often be integrated with other enterprise software systems, such as ERP, MRP, and CRM, to provide a comprehensive view of quality management across the organization.
The Quality Management System Software Market research report provides all the information related to the industry. It gives the market outlook by providing authentic data to its clients, which helps them make essential decisions. It gives an overview of the market that includes its definition, applications and developments, and manufacturing technology. This Quality Management System Software market research report tracks all the recent developments and innovations in the market. It gives data regarding the obstacles to establishing the business and guidance for overcoming the upcoming challenges and obstacles.
The global Quality Management System Software Market is expected to grow at a significant CAGR of 10% during the forecast period (2023 to 2030).
Get the PDF Sample Copy (Including FULL TOC, Graphs, and Tables) of this report @:
https://a2zmarketresearch.com/sample-request
Some of the Top companies Influencing this Market include:
SAP, ETQ, PTC, Oracle, AssurX, Veeva, Siemens, Intelex, Sparta, Pilgrim, MasterControl, ComplianceQuest, Cority, TIP Technologies
Competitive landscape:
This Quality Management System Software research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.
Market Scenario:
Firstly, this Quality Management System Software research report introduces the market by providing an overview that includes definitions, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of the current market designs and other basic characteristics is provided in the Quality Management System Software report.
Segmentation Analysis of the market
The market is segmented based on type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.
Market Segmentation: By Type
Cloud-Based
On-Premises
Market Segmentation: By Application
Large Enterprises (1000+ Users)
Medium-Sized Enterprises (499-1000 Users)
Small Enterprises (1-499 Users)
Regional Coverage:
The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:
North America
South America
Asia and Pacific region
Middle East and Africa
Europe
An assessment of the competitive threat that new players and products are likely to present to established ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants in the global Quality Management System Software market. To present a clear vision of the market, the competitive landscape has been thoroughly analyzed utilizing value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.
For Any Query or Customization:
https://a2zmarketresearch.com/ask-for-customization
This report aims to provide:
A qualitative and quantitative analysis of the current trends, dynamics, and estimations from 2022 to 2029.
Analysis tools such as SWOT analysis and Porter’s five forces analysis are utilized; these explain the potency of buyers and suppliers to make profit-oriented decisions and strengthen their business.
The in-depth market segmentation analysis helps identify the prevailing market opportunities.
In the end, this Quality Management System Software report helps to save you time and money by delivering unbiased information under one roof.
Table of Contents
Global Quality Management System Software Market Research Report 2022 – 2029
Chapter 1 Quality Management System Software Market Overview
Chapter 2 Global Economic Impact on Industry
Chapter 3 Global Market Competition by Manufacturers
Chapter 4 Global Production, Revenue (Value) by Region
Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions
Chapter 6 Global Production, Revenue (Value), Price Trend by Type
Chapter 7 Global Market Analysis by Application
Chapter 8 Manufacturing Cost Analysis
Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers
Chapter 10 Marketing Strategy Analysis, Distributors/Traders
Chapter 11 Market Effect Factors Analysis
Chapter 12 Global Quality Management System Software Market Forecast
Buy Exclusive Report @:
https://a2zmarketresearch.com/checkout
About A2Z Market Research:
The A2Z Market Research library provides syndication reports from market researchers around the world. Ready-to-buy syndication Market research studies will help you find the most relevant business intelligence.
Our Research Analyst Provides business insights and market research reports for large and small businesses.
The company helps clients build business policies and grow in that market area. A2Z Market Research is not only interested in industry reports dealing with telecommunications, healthcare, pharmaceuticals, financial services, energy, technology, real estate, logistics, F & B, media, etc. but also your company data, country profiles, trends, information and analysis on the sector of your interest.
Contact Us:
Roger Smith
1887 WHITNEY MESA DR HENDERSON, NV 89014
[email protected]
+1 775 237 4157
EaseText launched Audio to Text Converter to easily transcribe … – Norman Transcript
NEW YORK, N.Y., Jan. 10, 2023 (SEND2PRESS NEWSWIRE) — It is with great enthusiasm that EaseText, the world-leading creativity software company, announces the launch of EaseText Audio to Text Converter, the latest iteration of their award-winning speech-to-text transcription software. With EaseText Audio to Text Converter, users can convert and transcribe audio to text offline on their computer with ease.
EaseText Audio to Text Converter is an offline AI-based automatic audio transcription software. It uses artificial intelligence technology to transcribe & convert audio to text in real-time. The transcription can run offline on your computer to keep your data safe and secure. It supports a wide range of languages and offers a range of customization features, including the ability to transcribe multiple speakers and generate summaries of meetings and conversations.
“With other online transcription software, users need to upload the audio file to a web server in order to transcribe the audio to text,” said Vincent Song, the CEO of EaseText. “Our Audio to Text Converter is an offline tool that provides high quality and accuracy. The whole converting process is done locally on the computer, even without internet. This keeps private data protected and secure.”
Step-by-step tutorial: https://www.easetext.com/tutorial/how-to-transcribe-audio-to-text-freely.html
OTHER FEATURES INCLUDE:
1 — Convert audio file to text in high quality
EaseText Audio to Text Converter can convert audio to text very quickly, with high quality and high accuracy. Batch file converting is also supported.
2 — Transcribe speech to text in real time
EaseText Audio to Text Converter is a renowned automatic transcription tool that uses artificial intelligence technology to transcribe audio to text in real-time.
3 — Record Meeting and take notes smoothly
With EaseText, users can easily record meetings & take notes from Zoom, Microsoft Teams, Google Meet, Cisco Webex, etc. It is a highly efficient tool in terms of both time and cost.
4 — Support saving text transcript as PDF, HTML, TXT and WORD
After converting and transcribing audio to text, users can export and save the content as a document file such as PDF, HTML, TXT, or Microsoft Word.
5 — Support 24 languages including English, Spanish, Dutch, Italian, Chinese, etc.
Price and Availability:
For personal use on one computer, it is available at $2.95/month. You can also buy the Family edition at $4.95/month for three computers.
Learn More:
https://www.easetext.com/tutorial/how-to-transcribe-audio-to-text-freely.html
About EaseText:
EaseText Software is a leading software development company providing data management software solutions. Founded in 2012, EaseText has been an award-winning developer, especially in the image, audio, video, PDF and text converting field.
More information: https://www.easetext.com
Facebook: https://www.facebook.com/easetext
Twitter: https://twitter.com/ease_text
YouTube: https://www.youtube.com/@easetext
###
What is the difference between Contract Management and CLM – JD Supra
Contract Management vs Contract Lifecycle management… They sound so similar that people easily mix them up, like Slovakia and Slovenia; conscious and conscience; apples and oranges (on second thought, maybe not that last one). But while contract management and contract lifecycle management seem synonymous at first glance, they’re not interchangeable.
If you’re looking for ways to improve your organization’s legal workflows, it’s important to know the difference. Otherwise, you risk being stuck with old-fashioned practices that won’t be of much help. That’s why we’re here. We’ll take you through some of the ins and outs of each so that you’re as familiar with them as the back of your hand.
Definitions
To choose the best possible approach to managing your document workflow, let’s take a peek at their definitions. This will help you choose the best option to control costs, oversee payments, monitor revenue, improve productivity, and reduce error.
Contract Management
Contract management is the process of managing contracts from their creation, through their execution by the chosen party, and to the eventual termination of the contract. We can say that contract management refers to the large scope of processes that encompass all contract-related operations. It can be manual, automated, or hybrid, where some processes are done manually and some are automated.
Contract Lifecycle Management
CLM is a forward-thinking approach to managing contracts. Contract operations are separated into defined stages, and the scope of actions needed at each stage is streamlined to achieve maximum efficiency. This approach often requires the use of technology to tailor actions across each stage of the contract lifecycle.
Contract Administration
Although this term is sometimes used to describe the same process, in reality, it’s just a part of contract management. Contract administration relates to everything you do before a contract is signed and executed.
Drawbacks of Old-School Contract Management
There are three approaches to contract management:
· Manual (all processes are done manually)
· Hybrid (some processes are manual and some are automated)
· Automated (the whole process is automated)
Doing something manually is an old-fashioned approach, whether you're doing it all manually or just some of it. This could be as simple as having a mountain of documents on your desk and, with two minutes remaining before a critical meeting, having to rapidly sift through them to find the one you need.
Automation speeds up processes and eliminates unnecessary involvement of other people. But when we’re talking about contract management, there are other drawbacks aside from speed.
Contract management helps control funds, reduce risks, and stay on top of performance. However, there are many spots where bottlenecks can form, causing spikes in inefficiency.
First off, managing contracts manually may result in the loss of valuable information. If your internal teams don’t know the terms of contracts by heart (which is rarely the case) and can’t quickly check them, how can you be sure that you’re meeting the obligations they contain? In addition, information can’t be quickly reused for new contracts, forcing you to spend a lot of time entering data you already had.
How can you find the data you need among numerous documents if you put the contract away once it’s executed?
A lack of knowledge causes poor planning, misunderstandings, missed deadlines, unnecessary costs, and uncalled-for contract prolongation.
Here’s a quick math problem: If a lawyer’s average hourly rate is $90, how much money does your company lose annually on mere copying and pasting data across legal documents?
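For illustration only (the hourly rate comes from the question above; the hours are an assumption): if each lawyer spends two hours a week copying and pasting, that is roughly 100 hours a year, or 100 × $90 = $9,000 lost per lawyer annually, before multiplying across the whole legal team.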
On top of that, if contract templates aren’t unified, they might not satisfy government requirements and company policies.
Here are some drawbacks to tracking contracts only from point A to point B:
· Lack of visibility. There are a lot of things you may need information about: dates, terms, costs, addendums, deliverables, party data, counterparty data, and due diligence. And that’s just the tip of the iceberg. With so many elements, it’s easy to lose track and discover that critical info got lost in the shuffle, making it excruciating to sift through and find.
Even if you do only some of these processes manually, it's already outdated. The most common non-automated approaches involve document exchange and storage. Let's consider the drawbacks of the old-school ways.
Where’s the Analytics?
When contracts are managed manually, there’s practically no performance analysis. And if there’s no analytics, there’s no chance to improve performance or find opportunities to increase the bottom line. If anything, you run the risk of burning money without even realizing it.
That’s because if data isn’t being pulled automatically, it’s hard to quickly check how well a contract’s performing. By having easy access to executed contracts, you can avoid wasting time searching for and collecting information that you need to analyze. Instead, all your time and effort can focus on monitoring and boosting performance.
Here’s a simple rule to live and work by: If you don’t control the whole lifecycle, you don’t control the workflow.
Document Exchange: Spreadsheets, spreadsheets everywhere!
In manual contract management, companies use programs such as MS Word to create a contract. Document exchange is often done using spreadsheets and emails. This may seem like the first step to increasing visibility and taking control over your contracts, but it's not as good as it may sound.
The first problem with sharing contracts via spreadsheets and emails is the lack of security. You might send a message or give access to the wrong person, or your email could be hacked or compromised by a phishing email. If cybercriminals break into your organization's system, they can communicate as if they were an employee, get the information they need, request money transfers to their account, or change data in your documents. Only one phishing message is enough to let them spy on your communication. On average, companies receive almost 1,200 such messages each month.
Inefficiency is another problem with manual exchange.
Keeping track of contracts, diligent reviews, updates, and data accuracy are crucial for business efficiency. The problem is spreadsheets are not designed to support the whole lifecycle of a contract, nor do they create a comfortable environment for real-time contract redlining and approval.
Storage
Without centralized storage, documents are stored across numerous folders on different computers or even on paper in file cabinets. Having tons of contracts in different forms (paper and electronic) scattered across different locations makes it hard, if not outright impossible, to track, manage, and analyze files. Decentralized ways of storing contracts create several problems.
In addition to colleagues, your clients could suffer as well. It’s difficult to provide proper customer service when colleagues aren’t able to find information quickly and can’t be sure they’re using correct data.
Contract Management vs CLM
CLM uses AI-based technology to control the entire lifecycle of a company’s contracts. Implementing contract management software is the fastest way to improve contract management and remove roadblocks that delay contract authoring, negotiations, approvals, and renewals. Thus contract execution is faster, and real-time visibility allows for easier contract management while decreasing risks and lowering costs.
CLM processes often address automated document drafting, streamlined contract redlining in real time, transparent approval flows, and analytics. These are among the ways a CLM system helps manage contracts.
Not to mention, contract lifecycle management saves a huge amount of time. Fast searches, no manual tracking of important dates, no manual data input, and self-service document creation allow you to forget about routine. And as a bonus: happier employees.
Final Words
If you want to gain control over your contracts, a manual approach to contract management is not the best option. Even after the contract is terminated, nothing’s over. It still holds valuable information that can be analyzed.
Contract lifecycle management facilitates workflows and streamlines document turnaround by providing centralized storage and easy access to information. Implementing a CLM system is a strategic step that can help you manage your contracts better and gain the best possible value.
Benefits of Utilizing CAD for New Home Construction – Software Advice
By: Sam Willis – Guest Contributor on July 12, 2022
Although construction is a hands-on type of profession, the most successful contractors leverage technology to the fullest. Whether it be accounting software to help keep financials in order, store locator software to route local customers to business locations, or inventory management software to ensure just-in-time delivery of building supplies, there is no shortage of programs that can give builders an edge in the current digital business environment.
One of the most essential programs for home builders is computer-aided design (CAD) software. Quality CAD software allows contractors to plan and develop every aspect of the home-building process, leading to elite project accuracy and improved client satisfaction.
Keep reading to find out six key benefits of using CAD in new home construction.
Freehand renderings of blueprints cannot match CAD software in terms of accuracy and image quality. CAD software provides engineers and architects with a plethora of tools to design a digital rendering of the home just as they imagined.
The right materials knowledge, such as fiber cement siding specifications or metal stud ceiling framing details, combined with the appropriate mathematical equations allows builders to leverage the software to come up with design concepts that are otherwise difficult to put down on paper.
In addition, a final CAD rendering is more legible and contains fewer errors than freehand efforts, resulting in better image quality and, ultimately, a more accurately constructed home.
CAD drawings in Procore (Source)
There is no shortage of parties who will need to access plans and blueprints when constructing a new home: architects, engineers, project managers, contractors, or even general construction workers.
Without CAD software, updating and distributing these important documents to all parties can be a major chore, often requiring daily construction meetings to make sure that everyone is on the same page. Not only does this keep workers out of the field and slow the construction process, but it prevents architects and engineers from getting started on their next build.
With CAD software, drawings and plans are stored in the cloud. This allows all team members to access documents from their personal devices, making changes as necessary and ensuring that everyone is alerted in real time for optimal project cohesion.
Although there have been incredible advances in building materials innovation in recent years, it can be difficult to tell how materials will perform in action until a home is actually built.
With CAD software, builders can avoid this pitfall by using computer simulations to quickly swap and test different types of building materials.
For example, a single click can switch the building’s framing materials from insulated metal panels to ICF, or roofing materials from asphalt shingles to composite slate tiles.
Not only does this give designers a better visual for how the different materials interact aesthetically, but it allows them to more accurately predict how material changes will impact design specifications.
Rework ends up costing construction firms an average of five percent of their contract value[1]. This equates to roughly $50,000 of revenue lost for every $1 million built.
However, with the right planning and communication, rework costs can approach zero.
In addition to more accurate digital renderings and document sharing through the cloud, CAD software allows architects to model electricity, plumbing, and other home elements, helping create a more comprehensive design with fewer surprises as construction of a new home progresses.
Just as CAD software can help construction firms avoid costly rework scenarios, it can improve client satisfaction by yielding more accurate project estimates.
Due to greater accuracy and specificity in the design process, firms can pinpoint the number of labor hours needed, predict material quantities with certainty, and better utilize tools and machinery to help arrive at the most accurate, competitive cost possible.
As so much of the construction industry is going digital in 2022, it’s beneficial to integrate CAD software with other construction technology.
Specifically, as enterprise resource planning (ERP) software is central to most construction operations, CAD software can be a critical cog in expediting the time it takes to transition raw materials into a move-in ready home.
One truly innovative way it can facilitate this process is through computer-aided manufacturing (CAM). Using CAM, the CAD model can send a production code for the manufacture of specific materials.
This can be especially beneficial in the fabrication of custom homes where stock supplies may not be sufficient for completing the project as designed.
Of all of the software programs at home builders’ disposal in 2022, arguably none is quite as beneficial as CAD software.
If your contracting firm is unhappy with how its designs are coming to life, invest in CAD software today to help your profitability soar.
To start bringing your designs to life and creating more accurate designs faster, compare and learn more about our collection of takeoff and CAD solutions.
Does your construction firm need features beyond CAD? Explore other tools and solutions that can help with all your construction needs.
Note: The application selected in this article is an example to show a feature in context and is not intended as an endorsement or recommendation. It has been obtained from sources believed to be reliable at the time of publication.
Sources
1. The Cost of Rework in Construction and How to Prevent It, eSUB, Inc.
Guidance: MCC Guidance to Accountable Entities on Technical … – Millennium Challenge Corporation
Guidance
February 1, 2022
As used herein, the following terms shall have the following meanings:
In accordance with the relevant grant agreements by which MCC provides funds, MCC has the right to review and approve (through a response of No-Objection) or disapprove (through a response of Objection) a wide range of documents and administrative actions proposed by Accountable Entities (AEs).
This guidance seeks to provide AEs with (1) information related to the MCC Technical Review and No-Objection processes, and (2) best practices on the establishment of internal AE processes relating to MCC document reviews. This guidance is intended to help AEs efficiently and effectively manage their internal review process, which can lead to submission of high-quality requests that meet all MCC requirements and can receive timely MCC Feedback and/or No-Objections.
No-Objections are a core component of MCC’s oversight model. Over the course of a compact, AEs typically submit hundreds of requests for No-Objection. Having clearly established processes and procedures to facilitate No-Objection requests is thus critical for program success.
The purpose of MCC’s No-Objection is to ensure that requests submitted by the AE comply with MCC’s policies, standards, and practices and the relevant legal agreements, as part of the agency’s stewardship of U.S. taxpayer dollars. MCC’s No-Objection assures the AE and the partner country that MCC will allow MCC Funds to be used for the proposed action and/or that a Government Expenditure is expected to fulfil the government’s obligations to MCC. No-Objection reviews allow MCC to oversee what is being proposed and how it will be accomplished or implemented, before permitting MCC Funds to be used. This process is critical, as items that move forward without receiving a required No-Objection could be subject to refund by the AE or partner country government or result in a Government Expenditure not being counted toward the government’s obligations to MCC.
Annex 1 identifies the common documents and decisions that require MCC No-Objection. Additional documents and requests for which MCC will provide its No-Objection should be identified and discussed between MCC and the AE on an ongoing basis throughout program implementation. Further, MCC has the right to Opt-in to provide its No-Objection on any document or decision it deems critical to overall program success. MCC may, at its discretion, also choose to Opt-out of No-Objection reviews. In cases where MCC decides to Opt-in or Opt-out of a review, the Country Team Leadership will provide written notification to the AE.
Requests for No-Objection must be submitted to MCC via the Resident Country Mission (RCM), following the standard process established between MCC and the AE (see Section III for additional information). Unless otherwise agreed by MCC in writing, all No-Objection requests should be submitted in English.
Following submission, MCC’s internal review involves a process with many stakeholders—it is not only one person within MCC who assesses a request for No-Objection.
MCC reviews for No-Objection will primarily focus on compliance with MCC requirements, and MCC will only object to a document if the assessment identifies Fatal Flaws.
Fatal Flaws include, without limitation, the following:
For additional details on how these are applied, the AE should consult the MCC Country Team.
When MCC objects, the specific Fatal Flaw(s), and suggested remedies, will be communicated to the AE in writing by the RCM.
Once MCC’s internal review process is complete, the RCM will respond to the AE with either a No-Objection or an Objection, following established procedures as outlined in Section III below. MCC’s response to a request for No-Objection will always be in English, though MCC may include additional attachments in other languages, where appropriate.
If MCC provides a No-Objection, the AE is authorized to move forward with the request. However, if MCC objects, MCC Funds cannot be used for the request. MCC’s Objection would likewise mean that a Government Expenditure used to implement the request would not fulfil the government’s obligations to MCC. Following an Objection, the AE will typically revise the request and resubmit it for No-Objection.
In some cases, MCC may provide its No-Objection, but also provide Feedback on issues that could help improve the document or request but are not considered Fatal Flaws. In these cases, the AE may choose whether to address or respond to the Feedback in the final version. Edits to address MCC Feedback are the only substantive changes an AE is authorized to make after MCC provides its No-Objection. If other substantive changes are introduced after MCC provides its No-Objection, the AE should resubmit the request for No-Objection.
The AE should always submit the final version of the document to MCC, including incorporation of any Feedback. If the final document or decision is materially different from what MCC provided a No-Objection to, MCC could withhold funding, and might even demand refunds of any amounts spent for purposes other than those approved by MCC.
The MCC Country Team Leadership will work with the AE to define typical expected response times for No-Objections. However, the amount of time required for the RCM to respond to a specific request for No-Objection will vary based on the type of request, level of complexity and whether any MCC consultants will be involved in the review process.
Particularly complex requests may require additional processing time and multiple submissions (in addition to Technical Reviews). In cases where the AE submissions have significant deficiencies and/or require additional coordination within MCC, the response time may be longer. AEs should also be aware that if they submit several requests for No-Objection in a short timeframe MCC may require more time than usual to process all the requests. In cases where MCC requires a longer turnaround time than normal, the RCM will alert the AE as early as possible.
To help promote more efficient No-Objection processes, one or more Technical Reviews with MCC are strongly recommended. This can help ensure that documents are in an acceptable state before they are submitted for No-Objection.
Technical Reviews provide an opportunity to identify and address significant issues that may be Fatal Flaws. Technical Reviews allow the MCC Country Team and AE counterparts to identify and discuss Feedback, varying technical approaches, and professional differences. They also allow the MCC Country Team to identify and recommend changes related to grammar or stylistic issues. In cases where MCC and AE staff disagree, each side should justify their position (based on previous experience, global standards, etc.) such that MCC can determine the way forward.
As discussed in Section III below, MCC and the AE should agree on whether, or in what circumstances, Technical Reviews are required, and the specific protocols for Technical Review submissions. When documents are submitted to MCC for Technical Review, they will be circulated to all MCC staff who will have a role in the No-Objection process.
The timeline for Technical Reviews may vary widely based on the level of complexity of the document(s) and whether any Informal Reviews are completed before the document is submitted for Technical Review. It is important, however, for MCC and the AE to agree up front on an appropriate timeline for a given Technical Review.
At the end of the Technical Review, MCC provides Feedback; an Objection or No-Objection is not issued.
In certain limited circumstances it may be possible for MCC to undertake expedited reviews; however, this is expected to be uncommon and based on a specific, exigent and justified need. In cases where the AE expects to request an expedited review, the AE should consult the RCM as early as possible to determine if it will be possible and if so, to agree on an appropriate review period.
There are many technical documents—those that will ultimately be submitted for No-Objection and those that will not—that AE staff work on together with their MCC counterparts. Some documents, such as consultant deliverables that do not require No-Objection, may go through an Informal Review by MCC but not require any subsequent action/submission. For other documents that do require No-Objection, an Informal Review can precede a Technical Review and/or submission for No-Objection. In cases where individuals undertake Informal Reviews, these can be performed on an informal basis, between MCC and AE counterparts, and do not need to follow standard No-Objection or Technical Review processes, as established through the procedures outlined in this document.
AE staff are encouraged to discuss the substance of upcoming requests directly with their MCC counterparts during the drafting process, and before the item is ready for Technical Review or submission for No-Objection. When documents are shared with technical counterparts for Informal Review, they may be shared with others on the MCC Country Team, but there are no standard requirements or procedures that govern this.
Close and regular communication between MCC and AE counterparts is critical for effective program operations. Close coordination throughout the Informal Review, Technical Review and No-Objection processes can lead to a more rapid clearance process and minimize iterations between MCC and the AE.
AE staff should collaborate closely with their technical counterparts in MCC, discussing upcoming requests and addressing any key questions. This provides an opportunity for MCC staff to share best practices from experience in other countries. It is also an opportunity for AE staff to confirm that the right AE staff are involved in the internal AE review process, and to determine up front what supporting documents may be required for a given No-Objection.
In planning for upcoming reviews, different types of requests may warrant different modes of collaboration. As appropriate for the specific request, MCC and AE staff should employ written exchanges, document reviews, phone calls on specific issues, collaborative work sessions to review and jointly edit documents, etc.
Following a Technical Review or Objection, and where practical, AEs should track changes in documents and submit both clean and tracked-changes versions for No-Objection. This allows MCC to quickly identify what has changed and facilitates a faster, more efficient No-Objection process.
As mentioned above, No-Objections are a critical component of program implementation processes, and delays in the No-Objection process can lead to overall program delays. To help promote success, AEs must work with MCC early in the program to establish protocols for how AEs submit, and MCC responds to, requests for Technical Review and No-Objection. At a minimum, these protocols should address the elements illustrated in the example referenced below.
For an example MCC Country Team and AE protocol for managing Technical Review and No-Objection processes, please see Annex 2.
To help inform both AE and MCC planning processes, the AE is encouraged to maintain a tracker or other tool that provides a summary overview of the items expected to be submitted for No-Objection over a given period. Noting the limited bandwidth of individuals and teams, AEs are encouraged to develop an internal prioritization process whereby expected upcoming submissions for No-Objection are reviewed, prioritized, and submitted in accordance with established work plans. This can help the AE ensure that items on the critical path are not delayed while other, lower-priority items move forward.
Clear communication with MCC, at both the technical and management levels, can help promote appropriate planning on both sides. This is especially critical for time-sensitive, large, or critical documents; requests that may require input from MCC consultants; 8 and/or items that must also go to the AE’s Board.
Please refer to Annex 3 for an example tracking tool. AEs are also encouraged to incorporate submissions for No-Objection into their master workplans.
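By way of illustration only, the sketch below shows one way such a tracker could be structured. The field names, status values, and priority scheme are assumptions made for this example rather than MCC requirements; Annex 3 remains the reference format.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical status values; an actual AE tracker may define its own.
STATUSES = ("Drafting", "Internal Review", "Technical Review",
            "Submitted for No-Objection", "No-Objection Received", "Objection")

@dataclass
class NoObjectionItem:
    """One row in a hypothetical No-Objection tracker (see Annex 3 for an example tool)."""
    title: str
    responsible_lead: str
    priority: int                          # e.g., 1 = critical path
    planned_submission: date
    status: str = "Drafting"
    actual_submission: Optional[date] = None
    response_received: Optional[date] = None

def upcoming(items: list[NoObjectionItem], by: date) -> list[NoObjectionItem]:
    """Return unsubmitted items due on or before the given date, highest priority first."""
    pending = [i for i in items
               if i.actual_submission is None and i.planned_submission <= by]
    return sorted(pending, key=lambda i: (i.priority, i.planned_submission))
```

A tracker of this kind can be kept in a spreadsheet just as easily; the point is that each item carries an owner, a priority, and planned versus actual dates, so that critical-path items are visible at a glance.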
For reviews which are undertaken through the MCC Management Information System (MCC MIS), responses are automatically transmitted to the system users. These users are typically the financial and/or procurement leads in the AE, though it is recommended that the appropriate AE executives also have system access and receive notifications. Whether through the system or otherwise, the AE and MCC should establish procedures to ensure that AE leadership is informed when decisions are taken on these requests.
Many items listed in Annex 1, as well as program-specific items that MCC reviews, are typically deliverables prepared by AE contractors, consultants, grantees, or partners (e.g., design documents, resettlement action plans, environmental and social impact assessments). To ensure that the AE is able to comply with any contractual timelines, the AE should review all contractual deliverables with MCC during the preparation of terms of reference and prior to contract signature to identify those that will require MCC No-Objection. AEs should then work with MCC to ensure that all contracts, grant agreements, etc. provide sufficient time for MCC to complete the Technical Review and/or No-Objection reviews and for the AE to review and consider MCC’s comments before responding to the contractor, consultant, grantee, or partner.
Note that although MCC will make its best effort during the planning phase to identify all documents requiring No-Objection, it may still Opt-in to document reviews at a later date. In such cases, the AE should promptly communicate with the consultant if contractual timelines may need to be amended or if any delays in deliverable finalization are expected.
Contract and grant amendments are the subject of many No-Objection requests and often result in significant discussion between the AE and MCC. In accordance with the MCC Program Procurement Guidelines (PPG) and Program Grant Guidelines (PGG), an MCC No-Objection is typically only required for contracts and amendments over certain thresholds.
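As a minimal sketch of how such a threshold rule operates, the function below simply compares a cumulative amendment value against an applicable threshold. The parameter names are placeholders; the actual thresholds and conditions are defined in the PPG and PGG and may also depend on the procurement method and the nature of the change.

```python
def amendment_requires_no_objection(cumulative_amendment_value_usd: float,
                                    applicable_threshold_usd: float) -> bool:
    """Hypothetical check: does the cumulative amendment value exceed the
    threshold above which MCC No-Objection is required? The threshold itself
    must come from the applicable PPG/PGG approvals matrix, not from code."""
    return cumulative_amendment_value_usd > applicable_threshold_usd
```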
For contract and grant amendments that require MCC No-Objection, prior amendments and the original contract or grant agreement are often relevant supporting documents that MCC needs to assess the request; this is particularly true in cases where those contracts, grant agreements, or amendments were not previously submitted to MCC for No-Objection. MCC and the AE should establish protocols for how the AE will transmit copies of the original signed contracts/grant agreements and any earlier amendments that preceded the amendment being sent for MCC No-Objection. 10
To facilitate the review process, MCC also recommends that AEs include a cover sheet/justification with each No-Objection request for contract/grant amendments. For a template cover sheet for contract amendments, which has been used successfully in some MCC countries, see the Contract Amendment Authorization Form within the MCC Procurement Toolkit for MCA Entities. 11
Contractual scope changes and grant program description changes may also introduce additional issues that MCC and AE staff must carefully consider. For instance, what is initially viewed as a simple change may have broader implications for expected program outcomes that require careful analysis of the costs, benefits, risks, purpose, potential delays, changes in economic rates of return (ERRs), etc. For a framework to help teams think through potential scope changes, please refer to the AE’s Contract Administration Manual (CAM), Contract Management Manual (CMM), Contract Administration and Management Manual (CAMM), Grants Operational Manual (GOM), Leverage Grant Facilities (LGF) Operational Manual and Partnership Navigator, and/or change management documents, as applicable.
Changes or modifications that impact a project or activity’s scope, cost, ERR, and/or number of beneficiaries may require additional MCC review prior to MCC issuing a response to a request for No-Objection. This may take longer than the normal review period, and these types of requests may have a higher likelihood of not receiving approval. In cases where the AE expects to request this type of modification, the AE should provide a rationale and consult the RCM as early as possible to determine how best to proceed.
For proposed budget reallocations submitted through Schedule A in the Quarterly Disbursement Request Package (QDRP), the AE must submit a budget reallocation request. 12 For other types of program modifications, AEs should consult the MCC Country Team Leadership to determine what specific documentation may be required.
To help facilitate a smooth No-Objection process, the AE is encouraged to establish its own internal protocols for developing and submitting requests for Technical Review and No-Objection.
AEs are encouraged to establish clear responsibilities for monitoring all No-Objection requirements, ensuring that they are submitted in accordance with the work plans, and tracking internal activity. Note that this should be monitored at multiple levels: technical leads monitor No-Objections related to their specific areas of responsibility and higher levels of management monitor the overall No-Objection process for the AE.
Noting the volume of No-Objection requests typically submitted over the course of a program, many AEs have found that designating a single No-Objection focal point, responsible for submitting requests, receiving responses, and monitoring the overall process, helps promote efficiency.
While No-Objections are a central tenet of MCC’s oversight processes, AE-level reviews are just as critical. To ensure document preparedness and appropriate communications, both internally and with key external partners, AEs are encouraged to establish an AE-level review and clearance process. In addition to internal AE reviewers, the AE-level review process may also require review, input, and/or approval from external stakeholders 13 or consultants. For instance, implementing entities may be closely involved with the review and approval of design documents, resettlement action plans, consultant reports, etc. The internal process established by the AE should appropriately track and document when such external stakeholders review, provide input on, and/or approve documents prior to submission to MCC for No-Objection.
Similarly, AE Board of Directors (AE Board) bylaws often require approval of many different types of documents. While the Accountable Entity Guidelines (Section 3.2.E), PPG, and PGG Approvals Matrixes outline some specific items that always require AE Board approval, each AE Board establishes its own review and approval requirements. To ensure that both AE and MCC staff understand which documents require AE Board approval within a specific country, the AE should develop a clear list of items requiring AE Board approval; 14 this list should be shared widely within the AE and with the MCC Country Team.
AEs are encouraged to develop one or more process flows to outline the roles, responsibilities, and steps in the AE-level review process. When developing the process flow(s), AEs should establish the order in which reviews take place, including whether the MCC Technical Review and/or No-Objection processes are undertaken before, after or concurrently with other external stakeholder reviews. Note that the internal review process undertaken by AE staff should normally be completed before documents are submitted to MCC. For reference, a process flow explanation and example process flow diagram are included in Annexes 4 and 4a.
To facilitate the internal AE review process, the AE is encouraged to develop its own clearance matrix that establishes roles, responsibilities, and authorities for each type of document. In particular, the AE Clearance Matrix should help ensure that crosscutting sectors are appropriately engaged in the internal AE review process before requests are submitted to MCC.
There are many forms this matrix can take, and AEs are encouraged to consult the MCC Country Team Leadership if they want to learn more about different approaches. An example AE Clearance Matrix is included in Annex 5. This example focuses specifically on items that require MCC No-Objection, though AEs are encouraged to expand or modify the matrix to include rows for additional, program-specific requests that will require internal review. AEs should also review and modify the columns and designations in the matrix to fit their specific staffing structures and country circumstances. To the extent possible, AEs are also encouraged to delegate approval responsibilities below the executive level.
This matrix should be updated periodically, and could be used to assign specific roles and responsibilities for each item in the No-Objection tracker, discussed in Section III.A above.
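Purely as an illustration of the underlying structure, a clearance matrix can be thought of as a mapping from each request type to the roles that review, clear, or approve it. The document types, roles, and designations below are hypothetical placeholders, not the Annex 5 content.

```python
# Hypothetical clearance matrix: request type -> role -> designation,
# where "R" = reviews, "C" = must clear, "A" = approves.
CLEARANCE_MATRIX = {
    "Contract amendment": {
        "Procurement Lead": "C",
        "Legal Counsel": "C",
        "Environmental and Social Lead": "R",   # crosscutting review
        "Deputy CEO": "A",
    },
    "Resettlement action plan": {
        "Land Lead": "C",
        "Environmental and Social Lead": "C",
        "M&E Lead": "R",
        "Deputy CEO": "A",
    },
}

def required_clearers(request_type: str) -> list[str]:
    """List the roles that must clear or approve before submission to MCC."""
    row = CLEARANCE_MATRIX.get(request_type, {})
    return [role for role, designation in row.items() if designation in ("C", "A")]
```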
Each request for No-Objection must include a clearance sheet indicating which AE staff have reviewed and cleared on the document prior to its submission to MCC. For an example clearance sheet, please see Annex 6.
AEs should agree with the RCM on a format for the clearance sheet. AEs should also establish clear internal processes and procedures for filling out clearance sheets and ensuring that they are submitted to MCC as part of the No-Objection request. In cases where a member of the AE had concerns and did not clear, the reason for their non-clearance, as well as the approver’s rationale for overruling their non-clearance, should be explained on the clearance sheet. Noting that MCC technical staff may raise concerns similar to those of AE technical staff, including this information on the clearance sheet provides an opportunity for MCC to consider the different perspectives when deciding whether to provide a No-Objection.
While the clearance sheets are not required for Technical Reviews, AEs are encouraged to provide MCC with information on which AE staff have provided input, and whether their feedback has been incorporated. This can help facilitate a more efficient MCC Technical Review process.
Some AEs have struggled with document management and version control. This can have significant impacts on program timelines and can negatively affect the No-Objection process. To address this issue, AEs are encouraged to use collaborative software to manage the internal document development and review processes. 15 AEs are also encouraged to establish a document management process that defines roles and responsibilities, nomenclature, and other details related to document management. 16 For additional information on how to use available software and/or establish document management systems, please consult MCC’s MCA MIS team.
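One hypothetical naming convention, with the pattern and fields assumed purely for illustration, encodes the document type, a short title, the version, and the status directly in the file name, which a simple check can then enforce:

```python
import re

# Hypothetical pattern: <TYPE>_<ShortTitle>_v<major.minor>_<STATUS>.docx
# e.g., "RAP_RiverRoad_v2.1_CLEAN.docx". All fields are illustrative only.
NAME_PATTERN = re.compile(
    r"^(?P<doc_type>[A-Z]{2,6})_(?P<title>[A-Za-z0-9-]+)"
    r"_v(?P<major>\d+)\.(?P<minor>\d+)_(?P<status>DRAFT|TRACKED|CLEAN)\.docx$"
)

def check_name(filename: str) -> dict:
    """Validate a file name against the convention and return its parts."""
    match = NAME_PATTERN.match(filename)
    if not match:
        raise ValueError(f"{filename!r} does not follow the naming convention")
    return match.groupdict()

print(check_name("RAP_RiverRoad_v2.1_CLEAN.docx"))
# {'doc_type': 'RAP', 'title': 'RiverRoad', 'major': '2', 'minor': '1', 'status': 'CLEAN'}
```

Whatever convention an AE adopts, the key is that version and status are unambiguous to every reviewer, so the tracked-changes and clean versions submitted for No-Objection can be matched without guesswork.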
With regard to specific No-Objection requests, the document owner should retain responsibility for ensuring that the correct version of the document is used at every step in the No-Objection process.
While not all AE staff will have a direct role in the No-Objection process, it is important that all program staff have at least a basic understanding of No-Objection requirements and procedures. To facilitate this, MCC and the AE should train staff at all levels so that, based on their specific roles within the AE, they understand the No-Objection process and its implications for compact implementation, the AE’s internal review process, and their own responsibilities within it. Trainings can be formal and/or informal and should be incorporated into the AE’s onboarding plans, periodic training plans, etc.
Of the following annexes, only Annex 1 conveys specific requirements: the items listed there must all be submitted for No-Objection. Annexes 2–6 are illustrative examples of tools for managing the Technical Review and No-Objection processes; there is no requirement for AEs to use these formats or approaches. Should AEs, together with their respective MCC Country Teams, choose to develop these tools, they may use the examples as starting points for further customization or create something entirely different.