ICD 11 – history of the development of the ICD from 1853 to 2015

The classification of disease began as a statistical study of disease.

This post looks back to 1853, when William Farr (1807–1883), a medical statistician at the General Register Office of England and Wales, laboured to make the best use of the imperfect classifications of disease available at the time. With the progress of preventive medicine, and to embody the advances of medical science, Farr worked to secure better classifications and international uniformity in their use. Farr’s model survived as the basis of the International List of Causes of Death.

In 1893, the Bertillon Classification of Causes of Death, prepared by Jacques Bertillon (1851–1922), Chief of Statistical Services of the City of Paris, was adopted as the International List of Causes of Death.

Revisions to the Bertillon or International List of Causes of Death were carried out in 1900 (ICD 1), 1910 (ICD 2) and 1920 (ICD 3).

After Bertillon’s death in 1922 left the revision process without a leader, the International Statistical Institute and the Health Organization of the League of Nations, which had taken an active interest in vital statistics, cooperated to prepare the subsequent revisions, expanding the rubrics of the 1920 International List of Causes of Death into the Fourth (1929) and the Fifth (1938) revisions.

The classification of disease remained almost wholly concerned with cause-of-death statistics.

But there was a growing need for a corresponding list of diseases, a classification of diseases for morbidity statistics.

Farr had actually recognised as far back as 1855 that it was also desirable to extend the cause-of-death statistics system to morbidity. It is interesting to note that five years later, in 1860, Florence Nightingale urged the adoption of Farr’s classification of diseases for the tabulation of hospital morbidity in her paper “Proposals for a uniform plan of hospital statistics”. Subsequently, ICD 1, ICD 2 and ICD 3 each adopted a parallel classification of diseases for use in statistics of sickness; however, this parallel classification failed to receive general acceptance.

The International Classification of Diseases, Injuries, and Causes of Death as a single list was endorsed by the First World Health Assembly in 1948 as ICD 6. For the first time, this list provided a common base for the comparison of morbidity and mortality statistics, which greatly facilitated coding operations.

The Seventh (ICD 7) and Eighth (ICD 8) Revisions of the International Classification of Diseases were carried out under the auspices of the WHO in 1955 and 1965 respectively.

The Ninth Revision (ICD 9) was accepted in 1975 and included the dagger and asterisk system as an optional alternative method of classifying diagnostic statements, including information about both an underlying general disease and a manifestation in a particular organ or site.

The Tenth Revision (ICD 10) was originally scheduled for 1985, following the established ten-year interval between revisions.

The WHO decided to delay ICD 10 until 1993 because the great expansion in the use of the ICD necessitated a thorough rethinking of its structure: the WHO needed to devise a stable and flexible classification that would not require fundamental revision for many years to come.

ICD 11 is not due until May 2015, when it is to be presented to the World Health Assembly. The Open ICD-11 Alpha Browser was opened to the public for viewing in May 2011 and for commenting in July 2011. The ICD-11 Beta version was then opened to the public to take part in the ICD revision process: making comments, making proposals to change ICD categories, participating in field trials and assisting in translation.

Below is an infographic I have designed to showcase all the past revisions of the ICD leading up to ICD 11, expected in 2015 (you can view a larger image by first clicking on the image below, which will open in a new tab of your current window, and then clicking again on the image in the new tab).

With this historical background of the ICD and the run-up to ICD 11, I present this post as a precursor to the previous post ICD 10 & ICD 11 Development – How, What, Why & When (this link will open in a new tab of your current window) and to my coming posts on ICD 11.

References:
International Statistical Classification of Diseases and Related Health Problems, Volume 2 Instruction manual 2011, 2010 edn, World Health Organization, Geneva, Switzerland

World Health Organization, 2012, Classifications, viewed 18 December 2012, < http://www.who.int/classifications/icd/revision/timeline/en/index.html >

ICD 10 & ICD 11 Development – How, What, Why & When

I have enrolled as a participant in the International Classification of Diseases, 11th Revision Beta phase. To participate proactively, I will have to make comments and proposals, propose definitions of diseases in a structured way, take the chance to participate in Field Trials, and perhaps assist in translating the ICD into other languages. This is not going to be easy, and one definitely needs knowledge of the ICD. Having worked with ICD 10, I will have to draw on my ICD 10 experience and try to contribute to the Beta phase.

So here is the first post in what will be a series of posts as I explore what is going on in the development of ICD 11.

Below is an infographic I designed to begin my first post. The infographic (you can view a larger image by first clicking on the image below, which will open in a new tab of your current window, and then clicking again on the image in the new tab) summarises facts I have found in the reference list below. They are by no means exhaustive.

References:
Can, Ç 2007, Production of ICD-11: The overall revision process, viewed 20 December 2012, < http://www.who.int/classifications/icd/ICDRevision.pdf >

James, H, ICD-11 in eleven points: An update, Research Centre for Injury Studies, Flinders University, Adelaide, viewed 23 December 2012, < http://dxrevisionwatch.files.wordpress.com/2012/07/harrisonslidesamdigumd2011.pdf >

International Statistical Classification of Diseases and Related Health Problems, Volume 2 Instruction manual 2011, 2010 edn, World Health Organization, Geneva, Switzerland

World Health Organization, 2012, Classifications, viewed 18 December 2012, < http://www.who.int/classifications/icd/revision/en/ >

The frequency of data analysis

A Health Information Management (HIM) / Medical Records (MR) practitioner in any hospital’s HIM / MR department knows pretty well how often the different sets of clinical and administrative data collected during, or in the time closely surrounding, the patient encounter are aggregated and analysed in his or her department or in other relevant departments. Patient records, uniform billing information, and discharge data sets are the main sources of the data that go into the literally hundreds of aggregate reports or queries developed and used by care providers and executives in hospitals. The frequency depends on the activity or area being measured, the frequency of measurement, and the hospital’s priorities.

What can these data then tell you about the hospital and the care provided to its patients?

How can you process these data into meaningful information?

As you already know, the number of aggregate reports that could be developed from patient records or other patient-related information (for example, accounting information) is practically limitless.

Data quality management programs are essential for clinical improvement. Thus, HIM / MR practitioners must realise the need for continuous quality improvement to ensure the accuracy and completeness of data collection at their end. HIM / MR practitioners frequently generate reports that yield data from their data collection. Such reports can then be used to help monitor patient outcomes and identify areas in which improved care is needed. However, HIM / MR practitioners need to run these reports regularly and act upon them to remedy missing or incomplete data. They must also ensure that standard operating procedures for data management processes are in place, correct inconsistent data collection methods, and minimise missing paper records. So I guess that more training and onsite audits could help facilitate additional improvement in data quality and efficiency.

In the post Data must be aggregated, analysed, and transformed into useful information by expert individuals (this link will open in a new tab of your current browser window), I outlined the importance of data analysis involving individuals who understand information management, have skills in data aggregation methods, and know how to use various statistical tools.

HIM / MR practitioners must ensure that data collection is up to date (data currency), must be able to relate the frequency of data analysis (timeliness) to the process under study, and must develop processes that match the frequency of data analysis to the hospital’s requirements.

The categories of statistics routinely gathered by HIM / MR practitioners in a hospital for data analysis include the following (a small worked example follows the list):

  1. Census statistics, including the average daily census and bed occupancy rates, calculated from ward data that reveal the number of patients present at any one time in the hospital.
  2. Discharge statistics like average length of stay, death rates, autopsy rates, infection rates, and consultation rates calculated from data accumulated when patients are discharged.
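To make these routine calculations concrete, here is a minimal Python sketch using invented figures; the formulas are simplified versions of the standard definitions, so treat it as an illustration rather than a statistics manual:

```python
# Illustrative only: simplified versions of common hospital statistics.
# All figures are invented for the example.

inpatient_service_days = 8_430   # daily census counts summed over the period
period_days = 30                 # days in the reporting period
bed_count = 320                  # beds available throughout the period
discharges = 912                 # discharges (including deaths) in the period
discharge_days = 4_104           # total length of stay of all discharged patients
deaths = 18                      # inpatient deaths in the period

average_daily_census = inpatient_service_days / period_days
bed_occupancy_rate = inpatient_service_days / (bed_count * period_days) * 100
average_length_of_stay = discharge_days / discharges
gross_death_rate = deaths / discharges * 100

print(f"Average daily census:   {average_daily_census:.1f} patients")
print(f"Bed occupancy rate:     {bed_occupancy_rate:.1f}%")
print(f"Average length of stay: {average_length_of_stay:.1f} days")
print(f"Gross death rate:       {gross_death_rate:.1f}%")
```

Run daily over ward counts, the same arithmetic yields the census statistics; run over a month of discharge abstracts, it yields the discharge statistics.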

HIM / MR practitioners also participate in generating quality reports, which may be used to improve customer service, quality of patient care, or overall operational efficiency. Examples of aggregate data that feed such quality reports include:

  1. customer service – the average time it takes to get an appointment at a clinic and the average referral volume per doctor
  2. quality of patient care – clinical laboratory quality control data may be analysed weekly to meet local regulations, patient fall data may be analysed monthly if falls are infrequent, infection rates, and unplanned returns to the operating room
  3. overall operational efficiency – cost per case, average reimbursement by Diagnosis Related Groups (DRG), and staffing levels by patient acuity

Hospital databases give HIM / MR practitioners access to any number of summary reports based on the data elements collected during the patient encounter, from which they routinely produce easy-to-use ad hoc statistical reports and trend analyses. Such statistics are frequently used to describe the characteristics of the patients within a hospital and also provide a basis for planning and monitoring patient services.

Here are some examples I can think of where a hospital determines how often data are aggregated and analysed, the frequency depending on the activity or area being measured, the frequency of measurement, and the hospital’s priorities.

The patient census application is needed daily to provide sufficient staffing for day-to-day operations, such as nursing and food service. However, monthly or annual patient census data are needed for the facility’s strategic planning.

Hospital management often wants summary information about particular diseases or treatments from the disease and procedure index function, generally handled as a component of the patient medical record system or the registration and discharge system. Examples of questions that might be asked are: What is the most common diagnosis in the hospital? What percentage of diabetes patients are of a particular ethnic group? What is the most common procedure performed on patients admitted with gastritis (or heart attack or any other diagnosis)? Here the process under study is the analysis of diseases and procedures, and the retrieval of information is based on the International Classification of Diseases (ICD) diagnosis and procedure codes that HIM / MR practitioners collect and enter into the discharge system daily. Such summary information to meet the hospital’s internal requirements could be required, for example, on an ad hoc, daily, weekly or monthly basis; this is the frequency of data analysis.
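As an illustration of how such questions might be answered from a discharge data set, here is a small pandas sketch; the column names and the toy records are my own assumptions for the example, not a real discharge-system schema:

```python
import pandas as pd

# Toy discharge records; in practice these come from the discharge system,
# keyed by the ICD diagnosis and procedure codes entered daily by HIM / MR staff.
discharges = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4, 5, 6],
    "icd_code":     ["E11", "I21", "E11", "K29", "E11", "I21"],  # principal diagnosis
    "ethnic_group": ["A", "B", "A", "A", "C", "B"],
    "procedure":    ["9904", "3722", "9904", "4516", "3995", "3722"],
})

# What is the most common diagnosis in the hospital?
print(discharges["icd_code"].value_counts().idxmax())

# What percentage of diabetes (here E11) patients are of a particular ethnic group?
diabetics = discharges[discharges["icd_code"] == "E11"]
print(diabetics["ethnic_group"].value_counts(normalize=True) * 100)

# What is the most common procedure for patients admitted with gastritis (K29)?
gastritis = discharges[discharges["icd_code"] == "K29"]
print(gastritis["procedure"].value_counts().idxmax())
```

Whether such a query runs ad hoc, daily, weekly or monthly is exactly the frequency-of-data-analysis decision discussed above.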

Another type of aggregate information that can be created on an ad hoc basis is the register list, which generally contains the names, and sometimes other identifying information, of patients seen in a particular area of the hospital, for example the emergency department or operating room.

Specialised trauma and tumor registries found in hospitals with high-level trauma or cancer centers are used to track information about patients over time and to collect detailed information for research purposes.

If your hospital has acquired JCI accreditation status, or is seeking it, then what I have tried to bring out in this post is truly relevant to the Joint Commission International (JCI) Standard QPS.4.1, which states that “The frequency of data analysis is appropriate to the process being studied and meets organization requirements.” As JCI (2010) puts it, “the aggregation of data at points in time enables a hospital to judge a particular process’s stability or a particular outcome’s predictability in relation to expectations”.
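As a toy illustration of what judging a process’s stability from data aggregated at points in time can look like, the sketch below applies a basic three-sigma control-limit check to invented monthly infection-rate figures; it only hints at statistical process control and is not a full implementation:

```python
from statistics import mean, stdev

# Invented monthly infection rates (%) aggregated at fixed points in time.
monthly_infection_rate = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.1, 5.0, 2.0, 2.2, 1.9, 2.1]

centre = mean(monthly_infection_rate)
sigma = stdev(monthly_infection_rate)
upper = centre + 3 * sigma
lower = max(centre - 3 * sigma, 0.0)

# A point outside the control limits suggests the process is not stable.
for month, rate in enumerate(monthly_infection_rate, start=1):
    if not lower <= rate <= upper:
        print(f"Month {month}: rate {rate}% is outside [{lower:.2f}, {upper:.2f}]")
```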

Nevertheless, regardless of the type of hospital you work at, HIM / MR practitioners must keep the frequency of data analysis appropriate to the process being studied and ensure that the analysis meets their hospital’s requirements.

References:
Joint Commission International 2010, Joint Commission International Accreditation Standards For Hospitals, 4th edn, JCI, USA

Michelle, AG & Mary, JB 2011, Essentials of Health Information Management: Principles and Practices, 2nd edn, Delmar, Cengage Learning, NY, USA

Wager, KA, Frances, WL & John, PG 2005, Managing health care information systems: a practical approach for health care executives, 1st edn, Jossey-Bass A Wiley Imprint, San Francisco, CA, USA

Big Data – Big Data Basics

Big Data 3Vs

This post continues from the introductory post Big Data – Introduction (this link will open in a new tab of your current browser window) and looks at the “3Vs” that define Big Data. As I researched the subject, three terms stood out: Volume, Velocity and Variety. They lead me to explain in this post the widely accepted characterisation of Big Data from Doug Laney, an analyst at Gartner (the world’s leading information technology research and advisory company), as “data that’s an order of magnitude greater than data you’re accustomed to.”

Accordingly, this “3Vs” model for describing Big Data spans three dimensions: data increasing in volume (amount of data), velocity (speed of data in and out), and variety (range of data types and sources).

The first dimension/characteristic, Volume, is captured in how Ed Dumbill, program chair for the O’Reilly Strata Conference (the leading event on the nuts and bolts of building a data-driven business, bringing together practitioners, researchers, IT leaders and entrepreneurs to discuss big data, Hadoop, analytics, visualisation and data markets), describes Big Data: “data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.”

To give you an idea of the volume of data that is increasing exponentially every year, customer transactions at Walmart are estimated to generate more than 2.5 petabytes of data every hour. Perhaps these infographics, courtesy of the online storage site Mozy and of Cisco, will help you visualise the meaning of petabytes of data and how it expands further into zettabytes sometime in the future.

Visualizing the Petabyte Age

Infographic credit : http://mozy.com/blog/misc/how-much-is-a-petabyte/

The Internet in 2015

Infographic credit : http://blogs.cisco.com/news/the-dawn-of-the-zettabyte-era-infographic/
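To put the Walmart figure above in perspective against the coming zettabyte era, here is a quick back-of-the-envelope calculation; the only input taken from the text is the 2.5 petabytes per hour estimate, and decimal units are assumed:

```python
# Back-of-the-envelope scale arithmetic (decimal units: 1 PB = 10**15 bytes).
PB = 10**15
ZB = 10**21

walmart_bytes_per_hour = 2.5 * PB

# How long would transactions at that rate take to accumulate one zettabyte?
hours = ZB / walmart_bytes_per_hour
print(f"{hours:,.0f} hours, i.e. about {hours / 24 / 365:,.0f} years")  # 400,000 hours, about 46 years
```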

Velocity, the second dimension/characteristic, describes the frequency at which data is generated, captured and shared by every imaginable device, all of which produce torrents of data.

I am sure you have heard of a batch process that takes a chunk of data, submits a job to the server and waits for delivery of the result. In a batch process, the incoming data rate is slower than the batch processing rate, and the result is useful despite the delay. For many new applications and sources of data, batch processing is just not possible anymore, since the speed of data creation has become even more important than the volume: data now streams into the server as real-time or near-real-time information in a continuous fashion.
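To make the contrast concrete, here is a toy Python sketch of the two styles: a batch job that can only answer once the whole chunk has arrived, and a streaming consumer that updates its answer as each record flows in. The simulated numeric records stand in for whatever real feed a server would receive:

```python
import random

def batch_average(records):
    # Batch style: the complete chunk must exist before any result does.
    return sum(records) / len(records)

def streaming_average(stream):
    # Streaming style: a running result is available after every record,
    # without storing the full history.
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count  # usable immediately, not only when the job ends

records = [random.uniform(0, 100) for _ in range(1000)]
print("batch result:", batch_average(records))

for i, running in enumerate(streaming_average(iter(records))):
    if i % 250 == 0:
        print(f"streaming result after {i + 1} records: {running:.2f}")
```

The batch function can say nothing until the whole chunk has arrived; the streaming version has a usable answer after every record and never stores the full history.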

The available data in the world today comes from everywhere. Variety, the third dimension/characteristic, signifies the proliferation of new data types that no longer fit the neat, easy-to-consume structures of traditional transactional data. All of it exists as a by-product of ordinary operations: data generated by humans, from posts to social media sites, digital pictures and videos, purchase transaction records and GPS signals from cell phones, and “sensor” data generated by computers, network devices and embedded chips used to gather climate information, from refrigerators and airplanes to bodily implants, and more.

The International Business Machines Corporation (IBM) adds Veracity as a fourth dimension of Big Data. Veracity refers to doubt about the quality (precision and accuracy) of data, given the variety and number of information sources.

I guess this is enough to know briefly about the basics of Big Data.

References:
About 2012, O’Reilly Strata Conference, viewed 13 December 2012, < http://strataconf.com/strata2012/public/content/about >

Andrew, M & Erik, B 2012, Big Data: The Management Revolution, Harvard Business Review October 2012, Boston, MA, USA

Dave, F 2012, The 3 I’s Of Big Data, Forbes, viewed 13 December 2012,
< http://www.forbes.com/sites/davefeinleib/2012/07/09/the-3-is-of-big-data/ >

Diya, S 2012, The 3Vs that define Big Data, Data Science Central, viewed 13 December 2012, < http://www.datasciencecentral.com/forum/topics/the-3vs-that-define-big-data >

Lorraine, F, Michele, O’C & Victoria, W 2012, Big Data, Bigger Outcomes, American Health Information Management Association, viewed 18 November 2012,
< http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_049741.hcsp?dDocName=bok1_049741 >

Stefan, S 2012, The 3 V of BIG Data, Agile Commerce, viewed 13 December 2012,
< http://multichannel-retailing.com/2012/05/the-3-v-of-big-data/ >

What is big data? 2012, International Business Machines Corporation (IBM), viewed 18 November 2012, < http://www-01.ibm.com/software/data/bigdata/ >

Medical and Nursing assessments in 24 hours, updates if less than 30 days old

My purpose in writing this post is to highlight that the Medical Records Review Tool (MRRT) form contains a provision to check a medical record for compliance with the “Medical assessment in 24 hours. Updates if less than 30 days old. Nursing assessment in 24 hours” documentation requirement during a Medical Records Review (MRR) session.

Members of an MRR session must be able to connect this provision in the MRRT form to the Joint Commission International (JCI) Standard AOP.1.4.1, which requires that “The initial medical and nursing assessments are completed within the first 24 hours after the patient’s admission as an inpatient or earlier as indicated by the patient’s condition or hospital policy.”

However, most members of an MRR session are usually unaware of this requirement, and it is the duty of the team leader to explain the standard, which requires that the initial assessments be completed as rapidly as possible so that correct treatment for the patient can begin as quickly as possible.

Members of the MRR session must be briefed that the hospital determines the time frame for completing assessments, in particular the medical and nursing assessments, depending on a variety of factors including:

  1. the types of patients cared for by the hospital,
  2. the complexity and duration of their care, and
  3. the dynamics of conditions surrounding their care.

Nonetheless, it is important for the team leader to stress that all initial medical and nursing assessments must be completed within 24 hours of admission to the hospital and be available for use by all those caring for the patient.

The team leader must also identify situations in which the patient’s condition indicates that the initial medical and/or nursing assessments be conducted and made available earlier, supported by a hospital policy that defines certain patient groups to be assessed sooner than 24 hours.

Such patient groups assessed sooner than 24 hours will include:

  1. emergency patients
  2. patients seen in a consultant’s private office or other outpatient setting prior to care in the hospital as an inpatient

The above patient groups will be assessed within different time frames, as follows (a small sketch of this decision logic follows the list):

  1. emergency patients are assessed immediately
  2. when the initial medical assessment is conducted in a consultant’s private office or other outpatient setting prior to care in the hospital as an inpatient, it must be no older than 30 days; (i) if the medical assessment is more than 30 days old, the medical history must be updated and the physical examination repeated, and (ii) if the medical assessment is 30 days old or less but there have been significant changes in the patient’s condition since the assessment was first done, the changes are noted in the patient’s medical record at the time of admission to inpatient status.
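The time frames above can be read as a simple decision procedure. Purely to make that logic explicit, here is a hedged Python sketch; the function and parameter names are my own invention, not part of any JCI tooling:

```python
from datetime import date

def preadmission_assessment_action(assessment_date: date, admission_date: date,
                                   significant_changes: bool) -> str:
    """Apply the 30-day rule to a medical assessment done before inpatient admission."""
    age_days = (admission_date - assessment_date).days
    if age_days > 30:
        # More than 30 days old: update the history and repeat the examination.
        return "update medical history and repeat physical examination"
    if significant_changes:
        # 30 days old or less, but the patient's condition has changed:
        # note the changes in the record at admission to inpatient status.
        return "note significant changes in the medical record at admission"
    return "assessment may be used as documented"

# Example: an assessment done 45 days before admission must be redone.
print(preadmission_assessment_action(date(2013, 1, 1), date(2013, 2, 15), False))
```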

The team leader may also explain the rationale for the 30-day time frame that applies when the assessment is completed in a consultant’s private office or other outpatient setting prior to inpatient care. Such an explanation may cover (JCI, 2010, p. 80) “the critical nature of the findings, the complexity of the patient, and the planned care and treatment (for example, the review confirms the clarity of the diagnosis and any planned procedures or treatments; the presence of radiographs needed in surgery; any change[s] in the patient’s condition, such as control of blood sugar; and identifies any critical lab tests that may need repeating)”, the findings being updated and/or re-examined by any qualified individual (medical, nursing, and other individuals and services responsible for patient care).

Readers can relate this post to the previous post Assessments within 24 hours (this link will open in a new tab of your current browser window) on the JCI Standard AOP.1.5, which states that “Assessment findings are documented in the patient’s record and readily available to those responsible for the patient’s care.”

References:
Joint Commission International 2010, Joint Commission International Accreditation Standards For Hospitals, 4th edn, JCI, USA