Qualified Risk Director Guidelines Released for Better Oversight of Risk by Boards of Directors

The Directors and Chief Risk Officers group (“the DCRO”) today issued its guidance to organizations seeking to better govern risk through the identification and recruitment of Qualified Risk Directors to their boards of directors and risk committees of the board.

In the spirit of the Audit Committee Financial Expert designation, in which specific board members are recognized as experts in the analysis of financial statements and control processes, the Qualified Risk Director guidelines were developed by an international group of active board directors and chief risk officers to help organizations identify board members with expertise in the governance of risk.

“An understanding of risk and its proper governance is not just about protecting organizations from large, unexpected losses – although that is very valuable,” said David R. Koenig, Chief Executive Officer of The Governance Fund Advisors and Executive Chair of the Qualified Risk Director governance council. “Risk governance is equally about how organizations can pursue the goals they have established, with more success,” he continued. “Qualified Risk Directors make those goals more achievable.”

The Qualified Risk Director guidelines are designed to aid in the identification and recruitment of risk governance experts to boards of directors and their risk committees, where present. They are being distributed to companies around the world and to regulators that have shown an interest in advancing the governance of risk at the board level. According to the guidelines, they are designed for voluntary adoption. However, their inclusion by regulators in ongoing assessments, or in the development of future requirements, will assist in the advancement of risk governance practices across institutions of all kinds.

The Qualified Risk Director guidelines are freely available for download.

About the Directors and Chief Risk Officers group – The DCRO is a voluntary assembly of more than 1,600 board directors, chief risk officers, and other C-level executives whose work involves the governance of risk. Members come from more than 100 countries and represent large and mid-size organizations, both for-profit and non-profit. Visit http://www.thegovernancefund.com/DCRO/ to learn more.

For more information, please contact David R. Koenig by e-mail david(dot)koenig(at)thegovfund(dot)com.

Congress is Broken – Does the Solution Come from Random Numbers?

As the 2012 elections approached, the approval rating of the U.S. Congress was somewhere between 10% and 20%. Yet, despite this, more than 90% of candidates for the U.S. House of Representatives who sought re-election won. It’s no secret that this incongruity threatens the credibility of our system of government. The result is not random, but the solution might be.

Every ten years, following the U.S. Census, the maps of the districts our representatives serve are re-drawn. While state laws on how this redistricting is carried out vary widely, some principles are more common than others, including the need for congressional districts to be compact, contiguous, and of roughly equal population. Federal law requires that redistricting not be done in any way that disenfranchises minority groups.

But through well-funded, well-planned efforts suiting both major parties, these districts have often been designed to meet another, ignoble criterion. In most states, the party that controls the state legislature controls this redistricting process and gerrymanders the districts to its best advantage. As New York Times blogger Nate Silver notes, in this past election 242 of 435 congressional districts were “landslide districts,” in which the vote tally deviated by more than 20 percent from the presidential vote. That is nearly double the number of districts in this category just twenty years ago, primarily the result of efforts by both major parties to create “safe districts” for their elected officials.

Landslide districts are safe, and the representatives elected in them are, in effect, chosen in their party’s primary, not in the general election. They are therefore beholden to their parties, whose stalwarts are the most likely to vote in primaries, and to the funders of those parties. These sources of power are known and unchanging.

Given this construct, representative performance in support of individual constituents, broadly, is not what matters. Representative performance in support of predictable sources of power is.

But what if it were much less certain who would make up the constituency of elected representatives each time redistricting occurred? What if we took that power away from parties and used math and computers to do the work? It turns out that everything could change, if it were allowed to.

As I pondered this issue of predictable sources of power, my thoughts immediately drifted toward the use of random numbers as a way to undo the predictable nature of the current construct. Consider one concept in which a state’s districts are determined entirely by a computer. The process begins by choosing, at random, a single census block within an existing district to serve as a “seed,” or starting point. The same is then done for each congressional district in that state.

Once the seed census blocks are known, a random number generator is used to choose a contiguous census block to attach to the original seed block. It may be to the north, south, east, west, northeast, northwest, southeast, or southwest of the seed block, whatever the random number generator says to do.

Next, the computer moves on to the second congressional district in the state, where census blocks are added by the same method until the population of that district is roughly equal to the first’s. This continues, district by district, until all congressional districts in the state have roughly equal numbers of people living in them. That is the end of round one.

To complete the redistricting of the state, the process is repeated by rounds, again and again, until no census blocks are left unassigned.
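The seeded, round-robin growth process described above can be sketched in a few lines of Python. This is a toy illustration, not an actual redistricting tool: it assumes a 4×4 grid of equal-population “census blocks” with simple north/south/east/west adjacency, and all function and variable names are my own.

```python
import random

def grid_neighbors(w, h):
    """4-adjacency map for a w x h grid of hypothetical census blocks."""
    nbrs = {}
    for x in range(w):
        for y in range(h):
            nbrs[(x, y)] = [(x + dx, y + dy)
                            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= x + dx < w and 0 <= y + dy < h]
    return nbrs

def redistrict(blocks, neighbors, seeds, rng):
    """Round-robin growth: each round, every district annexes one random
    unassigned block adjacent to its current territory. With blocks of
    similar population, this keeps district populations roughly equal."""
    assignment = {s: d for d, s in enumerate(seeds)}
    members = {d: [s] for d, s in enumerate(seeds)}
    unassigned = set(blocks) - set(seeds)
    while unassigned:
        grew = False
        for d in members:
            # candidate blocks: unassigned neighbors of this district's blocks
            cands = [n for b in members[d] for n in neighbors[b] if n in unassigned]
            if cands:
                pick = rng.choice(cands)
                assignment[pick] = d
                members[d].append(pick)
                unassigned.discard(pick)
                grew = True
        if not grew:
            break  # no district can reach the remaining blocks
    return assignment

rng = random.Random(2012)
nbrs = grid_neighbors(4, 4)
blocks = list(nbrs)
seed_a = rng.choice(blocks)
seed_b = rng.choice([b for b in blocks if b != seed_a])
districts = redistrict(blocks, nbrs, [seed_a, seed_b], rng)
```

A production version would, as the article describes, annex blocks until each district hits a population target rather than one block per round, since real census blocks vary widely in population.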

The result of taking this approach is that new congressional districts are created which are both contiguous and roughly equal in population. They are also likely to be compact. And since the process is random and takes no account of race, there is no intentional disenfranchisement of any minority group. These randomly generated districts are also drawn in a process free from political interference.

After an election is completed using these redrawn maps, the representative of the district will not know with certainty – only with some probability – who will be determining their future election fate. Maybe their district will be “safe” after the next redistricting, or maybe it will be competitive. Their financial sources of power may or may not be dealing with someone who they can assume will be in their seat for an extended period of time. And, party primary voters may have to hedge their bets by electing more centrist politicians if there is a threat that they might need independents and voters from the other party to keep their party in office.

In other words, their re-election fates are more likely to be based on their job performance for the whole district, not just on what one party wants them to do.

While the Constitution calls for re-apportionment of the number of seats in the House, and thus redistricting, to happen every ten years, the ease of a seeded random number process could allow redistricting to happen before every election, making the sources of power even more unpredictable and forcing elected officials to work for the good of their whole district.

It seems like a good idea, doesn’t it?

Well, it turns out that it’s not quite this easy, in part because state legislatures have taken a legislative route to try to fix the problems of gerrymandering and to instill certain “values” into the process. In an excellent overview of the current state of research on the use of computers to redistrict, The Promise and Perils of Computers in Redistricting, Micah Altman, Senior Research Scientist at Harvard University’s Institute for Quantitative Social Science, and Michael P. McDonald, Associate Professor in the Department of Public and International Affairs at George Mason University, survey these problems.

These problems include geographical issues, consistency with existing case law, disparate impact on minorities even when none is intended by the process, and small-sample problems with random number generators, among others.

And, it may be the case that congressional districts are becoming more partisan because people are moving into areas where people like them already live. Living in “safe” districts may be something people are choosing to do “with their feet.”

Still, in what seems to be a very badly broken system – one in which an entity disapproved of by 80-90% of its “customers” keeps 90% of its eligible service providers in place – it’s time to use the power of randomness to take away the power of entrenchment and parties. I, for one, would welcome this change. But it’s only going to come about if we demand it.

David R. Koenig is the author of Governance Reimagined: Organizational Design, Risk, and Value Creation (John Wiley and Sons, 2012) and Chief Executive Officer of The Governance Fund Advisors, LLC.

Crisis Sentiment Index Improves by 5 Points in December; Still More than 40 Points Away from Normal Conditions, But Some Positive Signs Emerging

The Crisis Sentiment Index (CSI) is a regular assessment of the status of the financial/economic crisis around the world by senior executives and board members who are involved in risk governance. Reported on a scale of 0 to 100, a reading of 75 indicates “normal conditions.”

For December 2012, the Crisis Sentiment Index (CSI) improved five points to a reading of 34. This is approximately the same level at which the index stood in June 2009. It is also the second highest reading since June 2011. Overall sentiment remains very cautious, but the first hints of some optimism were revealed, particularly as relates to the potential for the U.S. economy to accelerate. Caution regarding event risk remains high.

All CSI sub-indices improved again this quarter. CSI-Credit increased back to a reading of 52, its second highest since the survey began in 2008. This sub-index maintains its place as the most positive of the sub-indices. Even at that level, though, CSI-Credit is nearly 25 points away from “normal conditions.” CSI-Banks was the most improved sub-index, moving up by seven points to a reading of 31, while CSI-Insurance improved by six points to a reading of 38, making it the second best of the sub-indices.

In our featured questions this quarter, we ask for respondents’ expectations regarding the direction that the economies of various countries and regions are likely to take. There is some hope revealed in this data. Still, a clear concentration of very low expectations, combined with one negative assessment that our respondents presciently brought forward six months ago, temper all assessments. Details on this question, each sub-index, and specific insights from respondents are contained in the full report. Should you have any questions, please don’t hesitate to contact me.

All past reports are available on the CSI home page.

Crisis Sentiment Index Worst Since March 2009, Fear at Highest Level Since Survey Began

The Crisis Sentiment Index is a quarterly assessment of the status of the financial crisis by board members, Chief Risk Officers and other C-level executives in companies around the world.

The analysis has proven quite prescient in the past.

Quick snapshot of this quarter’s results:

• The CSI for September 2011 has plunged 14 points to a reading of 23, the lowest reading since March of 2009
• CSI-Fear, the subindex that measures panic among professionals and the public, has deteriorated by 21 points to a reading of 12, five points below the previous worst reading since we began the survey in September of 2008
• CSI-Banking has fallen 17 points to 19, the worst reading since March of 2009
• CSI-Hedge Funds has fallen 13 points to 17, also the worst reading since March of 2009
• CSI-Money Markets has fallen 15 points to 29, the worst reading since February of 2009
• CSI-Credit fell only 3 points to a reading of 51

The full report can be downloaded here and all past reports are available on the CSI home page.

Crisis Sentiment Index Rises Two Points in December

The Crisis Sentiment Index (CSI) is a regular assessment of the status of the financial/economic crisis around the world by senior executives and board members who are involved in risk management. Reported on a scale of 0 to 100, a reading of 75 indicates “normal conditions”.

During the past quarter, negative sentiment from emerging and accelerating European sovereign risks was offset by modest improvements in credit availability and a reduction in fear, according to our survey respondents. As a result, the Crisis Sentiment Index (CSI) for December 2010 rose two points to a reading of 39. At this level, roughly half of the decline in the CSI that followed the start of the Greece sovereign crisis has been recovered. Still, the index remains more than 35 points away from “normal conditions”.

The December CSI report can be downloaded at http://ductilibility.com/PDF/Crisis_Sentiment_Index_12_22_10.pdf.

All past CSI reports are available for download at http://ductilibility.com/Crisis_Sentiment_Index.html.

Evidence that Risk Management Adds Value

Evidence is growing that risk management adds value.

Two papers that have recently been shared with me looked very specifically to answer questions about the impact of risk management programs at firms and both have found the answer to be in the affirmative.

The first paper looks for evidence of an impact on firm value when Enterprise Risk Management (ERM) programs are in place. Its authors find a positive relation between firm value and the implementation of ERM — roughly a 20% value premium — which is statistically and economically significant.

The second paper, which Jean Hinrichs shared with me, focuses on the application of risk management models and the use of risk officers at hedge funds and finds:

– Funds in their sample that used formal models performed better in the extreme down months of 2008 and, in general, had lower exposures to systematic risk.

– Funds employing value at risk, stress testing and scenario analysis had more accurate expectations of how they would perform in a short-term equity bear market.

One of my professional missions is to make risk management primarily a positive contributor to better business decisions, avoiding the trap of focusing only on the loss side of the distribution and on loss avoidance.

It’s good to see growing academic evidence that a positive impact from risk management can be measured, even at this early stage in the development of the risk profession. It supplements the individual practical experiences we relay to others.

Divide and Conquer

In my recent conversations with various board members and senior risk officers, I have become more certain of the need to end the dual and conflicting roles assigned to the newly emerged Chief Risk Officer. It is not reasonable to expect a company to make the most effective use of its risk capital when its best resource for doing so is also expected to act as a watchdog. While such an arrangement helps to deal with the Board/CEO agency problem, it simultaneously under-serves shareholders by diverting the attention of those who best understand risk from advising on how to best use it.

In the emerging role of Chief Risk Officer, several trends can be documented. First, the role has realized a quick ascendancy in the corporate hierarchy. Second, that quick ascendancy has provided both an opportunity for influence and an opportunity for blame. Third, as boards realize the dearth of understanding of modern risk practices among most of their members, there is greater reliance on a direct line to the risk-management infrastructure.

See the image below for the typical expectations of a CRO, delineated by their business-enhancement and oversight functions. I contend that at each level of responsibility in this chart there is a conflict of obligation that undermines the potential for addressing each effectively.


In the most effective governance structure, a Board of Directors, as a whole, will give its chief executive directives on corporate objectives and the rules by which those objectives can be pursued. The Board’s other chief duty is then to evaluate the performance of the chief executive in his/her pursuit. This clear and singular relationship between the Board and the company creates clear accountability, albeit with the aforementioned agency risk.

As a check on the agency risk, many boards are giving their corporate chief risk officers either a direct or indirect reporting line to them. While well-intended, the result is that there is now a diffusion of accountability and a perception, perhaps unintentional, that the Chief Risk Officer is now responsible for the risks of the company, while the Chief Executive Officer is responsible for the business of the company. Businesses exist to take risk. Every business decision is a risk-taking/management decision and thus the management of risk should never be separated from the management of the business.

Those engaged in Chief Risk Officer roles often bring to their role a unique appreciation of the stochastic nature of the future. This is a complementary talent, in the same manner that a unique understanding of marketing, communications, or customer trends complements the overall business decision-making process. Yet, as long as a company’s Chief Risk Officer has divided tasks – the escalation of issues (or perceived ownership of all that goes wrong) and the effective taking of risk – neither can get full attention. Rather, as I argued back in 2001, the ultimate evolution of the risk manager is into a business-line advocate. The unique skills they bring are best employed by educating and providing resources to the business lines, so that the management of risk sits as close in the organization as possible to the point at which risk is originated.

If the CEO is properly incented to ensure that the company has sufficient risk management resources, and is expected to report to the Board on a regular basis how such is being achieved, the CRO is freed to pursue the most effective use of “risk capital” for the company.

The agency problem still exists, though, and boards cannot ignore it. To deal with it, the creation of a Board Chief Risk Officer, whose task is to randomly audit elements of the company’s risk infrastructure for consistency with the reports of the CEO, is warranted. The reporting line is direct to the Chair of the Board, or the Lead Independent Director. There is no confusion about their responsibility and there are no conflicting objectives in their job description.

The Board Chief Risk Officer is, in effect, an internal-external audit role. The BCRO’s job is to sample, test, and report. The role is not accountable for things that go wrong, as that is the CEO’s accountability. But it is accountable for reporting and affirming whether the CEO’s reports to the Board regarding the management of risk are accurate. In fact, such a duty could be codified in regulation.

The image below shows how the conflicting duties of the current CRO have been divided among these two roles:


Note: you can listen to a webinar in which I expand on this in the context of networked and distributive governance. My presentation is in the first 30 minutes of the session, with some Q&A at the end.