The challenge of data volumes
Financial reporting for insurance companies continues to grow in complexity. Regulatory standards are evolving, with new mandatory rules, thresholds, schedules, submissions and disclosures announced routinely. The time and effort required to meet these technical requirements has ballooned, and with it the pressure on finance executives to reduce turnaround time and increase accuracy. This is truer for some processes than others. One such process is the Finance Close in Property & Casualty (P&C) insurance. Finance Close is a key accounting activity for the industry, and solving the problems associated with it is not only important in its own right but also offers insights and cues for managing other processes better.
Finance Close is notably complex and time consuming. The process determines how premiums, losses and reinsurance are ultimately recorded at the General Ledger (GL) level, and it is complex because it depends on several operational systems and sub-ledgers. With increased regulatory oversight, the data associated with the Finance Close process has grown exponentially, and insurance companies urgently need ways to manage it. The solution lies in a new breed of data appliances that combine data storage and analytical engines. Stakeholders and regulators are demanding more detailed explanations, higher granularity of data, shorter periods for filing technical reports and financial statements, new definitions and disclosures, enhanced governance processes, and external auditor testing and reviews. In such a business, the only way an organization can deliver is to use database appliances to reduce the stress on its IT systems.
The Finance Close landscape also serves very well to explain the more general problem in the industry. An organization may collectively insure thousands of its employees for, say, a period of 10 years. Although the premium for the coverage may be paid upfront on a single day each year, the insurer cannot record the complete revenue in its GL. The premium for each individual covered needs to be broken down for each day across the 10-year period of the policy. Only the portion of the premium earned up to the current date can be recorded as revenue.
In essence, a single transaction sets off literally thousands of different records for the duration of the policy. Each record must be meticulously and accurately calculated, verified, captured and stored for retrieval. Regulatory remit may require these records to be stored for upwards of 7 years. Any future service request could invoke those records, and a query from the Insurance Regulatory Authority (IRA) may require them to be recovered for disclosure and analysis.
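To make the earning mechanics described above concrete, the sketch below splits a single upfront premium into one record per day and sums only the portion earned up to a reporting date. It is a minimal illustration that assumes straight-line daily earning and invents its field names (policy_id, earning_date, earned_premium); actual earning patterns, rounding rules and record layouts vary by insurer and line of business.

```python
from datetime import date, timedelta

def daily_earning_records(policy_id, premium, period_start, period_end):
    """Split one upfront premium into one earned-premium record per day."""
    days = (period_end - period_start).days + 1
    per_day = premium / days
    for offset in range(days):
        yield {
            "policy_id": policy_id,
            "earning_date": period_start + timedelta(days=offset),
            "earned_premium": round(per_day, 2),
        }

def earned_to_date(records, as_of):
    """Only the portion earned up to the reporting date is posted as revenue."""
    return sum(r["earned_premium"] for r in records if r["earning_date"] <= as_of)

# One premium payment fans out into hundreds of daily records.
records = list(daily_earning_records("POL-001", 3650.00,
                                     date(2024, 1, 1), date(2024, 12, 31)))
print(len(records))                                          # 366 daily records
print(round(earned_to_date(records, date(2024, 3, 31)), 2))  # revenue recognizable at quarter end
```

Scale this fan-out across every covered individual and every policy year, and the volume of records the close process must hold and retrieve becomes clear.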
Now multiply this situation across insurance policies covering the entire P&C spectrum, from damage to property and workers' compensation to underwriting equipment warranties and reinsurance, and records management, retrieval and analysis become a nightmare with traditional tools.
Limitations of traditional data management
Traditional data management techniques require database administrators to perform a number of time-consuming tasks to define and structure the data before the database can be queried and deliver the quality and performance expected by large sets of users. As the data grows, it must be migrated to an increasing number of servers. This may not initially appear to be problematic, as database management technology has been around for over three decades and has evolved considerably. However, the technology was built to manage millions of records amounting to a few thousand gigabytes. Today, most enterprises deal with billions of records. Insurance data has reached a point where some institutions have begun to refer to themselves as petabyte companies. Their data often takes 15 to 20 days to process. Teams providing data for the Finance Close for final posting to the GL do not have the luxury of such time.
Migrating to an increasing number of servers presents several hurdles. To begin with, database administrators must work with different system vendors to get their DBMS to scale out. Secondly, as the IT sprawl associated with the data grows, the performance and availability of the complex system become uncertain. Finally, a decision to migrate to multiple servers means a continuous and increasing investment in hardware, storage, software, upgrades, licenses, backups, maintenance and services.
How database appliances can take the stress out of the process
With pressure on costs and the need to ensure high system availability, database administrators are exploring the use of database appliances. A database appliance is a one-stop solution to today's growing demand for efficient, reliable and scalable data management. These appliances tightly integrate servers, storage, operating system, software and DBMS, and include the tools to extract data and analyze it for business intelligence. In other words, the processing happens as close to the data as possible, eliminating the time and the stress of securely shipping the data elsewhere. It also means optimal performance, because the software is built for the hardware. When data volumes grow, additional modular components can simply be plugged in.
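The sketch below illustrates what "processing close to the data" means in practice. It is a simplified example, not the interface of any particular appliance: sqlite3 stands in for the appliance's SQL engine, and the earned_premium table and its columns are hypothetical. The point is where the aggregation work happens, not the specific engine.

```python
import sqlite3

# A stand-in engine with a hypothetical earned-premium table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE earned_premium (
                  policy_id TEXT, line_of_business TEXT, amount REAL)""")
conn.executemany("INSERT INTO earned_premium VALUES (?, ?, ?)",
                 [("POL-001", "property", 907.27),
                  ("POL-002", "workers_comp", 1210.50),
                  ("POL-003", "property", 455.00)])

# Pattern 1: ship every record to the application, then aggregate there.
rows = conn.execute("SELECT line_of_business, amount FROM earned_premium").fetchall()
totals = {}
for lob, amount in rows:
    totals[lob] = totals.get(lob, 0.0) + amount

# Pattern 2: push the aggregation down so it runs next to the data and only
# the summarized result travels back -- the appliance model.
pushed_down = conn.execute("""SELECT line_of_business, SUM(amount)
                              FROM earned_premium
                              GROUP BY line_of_business""").fetchall()

print(totals)
print(dict(pushed_down))
```

With three rows the two patterns are indistinguishable; with billions of daily earned-premium records, the difference between moving the data to the computation and moving the computation to the data is what determines whether the close can finish on time.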
The advantages of database appliances in the insurance industry cannot be overlooked: tight integration of hardware and software, performance tuned for the data, modular scalability as volumes grow, simpler administration and a more predictable cost of ownership.
What to expect from database appliances
In light of these advantages, it is not surprising to see a surge of interest in database appliances in the insurance business. The focus is on systems that manage between 1 and 10 terabytes of data, although most large insurers will do well to consider systems that manage between 10 and 100 terabytes.
With such large data volumes, even a few seconds shaved off each database operation can add up to a massive change in capability. Processes that take two weeks because data is spread across a variety of systems can be brought down to around 10 hours with the introduction of database appliances.
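A back-of-envelope calculation shows why small per-operation savings matter at this scale. The figures below are assumptions chosen purely for illustration, not benchmarks from any specific appliance or insurer.

```python
# Assumed figures, for illustration only.
records_per_close = 500_000_000      # records touched in one close cycle
seconds_saved_per_record = 0.002     # 2 milliseconds saved per record

total_seconds_saved = records_per_close * seconds_saved_per_record
print(total_seconds_saved / 3600, "hours saved")   # ~278 hours, roughly 11.5 days
```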
The fact is that decisions in the industry cannot wait two weeks. Timely reconciliation and the close of certain processes are essential to operational efficiency, minimizing financial risk and reducing the likelihood of regulatory action.
Going forward, a good rule of thumb is that investments in database appliances do better when a wide range of tools is available for them, ideally from the open source space, and when they are built with industry specifications and standards in mind. A system that is built for purpose will deliver better ROI and, in the long run, will be simpler for IT to manage.