Credit rating definition

What is a Credit Rating?

A credit rating is a standardized score assigned to an entity or its securities by a credit rating agency, based on a review of the entity's financial condition. The review draws on financial and operational information provided by the business, as well as information obtained from other sources.

The credit rating agencies all have different scoring systems, but the highest score generally given is AAA, and the lowest score is either C or D. If a credit rating is sufficiently high, investors will likely accept a lower effective interest rate on any debt issued by an entity, on the grounds that there is a low risk of default. Conversely, an entity given a low score may not be able to secure any debt financing at all.
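To make the ordering behind these letter grades concrete, here is a minimal Python sketch (not part of the original article) that represents a simplified S&P/Fitch-style scale as an ordered list, with plus/minus modifiers omitted. The cutoff at BBB for "investment grade" versus "speculative grade" reflects general market convention rather than anything defined above.

```python
# Simplified letter-grade scale, ordered from lowest to highest default risk.
# Plus/minus modifiers (e.g., AA+, BBB-) are omitted for clarity.
SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "CC", "C", "D"]

def is_investment_grade(rating: str) -> bool:
    """Ratings of BBB or better are conventionally 'investment grade';
    BB and below are 'speculative grade' (higher assumed default risk)."""
    return SCALE.index(rating) <= SCALE.index("BBB")

def riskier(a: str, b: str) -> str:
    """Return whichever of two ratings implies the higher default risk."""
    return a if SCALE.index(a) > SCALE.index(b) else b

if __name__ == "__main__":
    print(is_investment_grade("AA"))   # True
    print(is_investment_grade("B"))    # False
    print(riskier("BBB", "CC"))        # CC
```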

A Brief History of Credit Ratings in the United States

19th Century Origins
Credit ratings in the United States began in the mid-19th century, driven by the need to assess the creditworthiness of borrowers in a rapidly industrializing economy. The first formal credit reporting agency, the Mercantile Agency (a predecessor of Dun & Bradstreet), was founded in 1841. It primarily focused on evaluating the credit reliability of merchants and businesses. These ratings were essential for lenders and suppliers extending credit in an era when reliable financial information was difficult to obtain.

Early 20th Century: Corporate Bonds
The modern credit rating industry took shape in the early 20th century with the rise of the bond market. John Moody published the first publicly available bond ratings in 1909, focusing on railroad bonds. His system categorized bonds into letter-grade rankings to signal their risk levels. This innovation was later adopted by other firms, including Standard Statistics (a precursor of Standard & Poor's), which began issuing ratings in the early 1920s, and Fitch, which introduced its own letter-grade scale in 1924.

Post-1930s: Regulatory Role
The stock market crash of 1929 and the ensuing Great Depression underscored the need for greater financial transparency and risk assessment. In response, regulatory bodies like the Securities and Exchange Commission (SEC) began to incorporate credit ratings into financial regulations. By the mid-20th century, ratings became a key tool for evaluating municipal and corporate bonds, and the major rating agencies—Moody’s, S&P, and Fitch—emerged as dominant players.

1970s: Issuer-Pay Model and Criticism
In the 1970s, credit rating agencies shifted from an investor-pay model (where subscribers paid for access to ratings) to an issuer-pay model, in which the entities issuing securities pay to have them rated. Because the agencies now earned their revenue from the parties they rated, this shift raised concerns about conflicts of interest. During this period, the SEC also introduced the concept of Nationally Recognized Statistical Rating Organizations (NRSROs), granting official status to certain rating agencies.

2000s: The Financial Crisis
The 2007–2008 financial crisis brought intense scrutiny to credit rating agencies, which were criticized for giving high ratings to complex financial instruments like mortgage-backed securities that later defaulted. This highlighted flaws in their methodologies and conflicts of interest within the issuer-pay model. In response, the Dodd-Frank Act of 2010 imposed stricter oversight and required more transparency in the rating process.

Modern Era
Today, credit ratings remain integral to global financial markets, guiding investors, regulators, and policymakers. The industry is dominated by Moody’s, S&P Global, and Fitch, collectively referred to as the "Big Three." Efforts to increase competition and improve rating accuracy continue, with new technologies and data analytics playing a growing role.
