The prince had aged very much that year. He showed marked signs of senility by a tendency to fall asleep, forgetfulness of quite recent events, remembrance of remote ones, and the childish vanity with which he accepted the role of head of the Moscow opposition.
The question is, Why was senile dementia such a popular topic when the sufferers were so few and its practical impact on society minimal?
In myth and fable, senility often intersects with immortality. In the Greek myth “Eos and Tithonus,” Eos, goddess of the dawn, asks Zeus to grant immortality to her mortal lover Tithonus. Acting like a vindictive schoolmaster intent on teaching a lesson about linguistic precision, Zeus complies in letter but not in spirit; he bestows on Tithonus immortality—but not eternal youth. Tithonus does live on and on, but in time becomes decrepit and senile. Eos, instead of being blessed with an endless life of passion and companionship, is consigned to the role of everlasting caregiver. Unwilling to bear the endless burden, she eventually shuts Tithonus up in a box, where he remains forever, paralyzed and babbling.
In Gulliver’s Travels, Jonathan Swift amplified this ancient tale into the grisly spectacle of the Struldbruggs, a subrace of immortal beings born at random among the mortal Luggnaggians. When first told that these rare immortals exist—some eleven hundred of them, including fifty in the city he is currently visiting—the foreign traveler Lemuel Gulliver is gleeful. A society with immortals in its midst, he presumes, is virtually guaranteed to be enlightened—“Happy People who enjoy so many living Examples of ancient Virtue, and have Masters ready to instruct them in the Wisdom of all former Ages!” The immortal elites, he reasons, will provide the best possible insurance against the repeating of past mistakes. With a living history as its guide, civilization will inevitably become smarter and stronger.
But Gulliver has gotten it all wrong. After much laughter at his expense, the Luggnaggians politely explain that immortality, far from being a blessing, is in fact the worst imaginable curse. The actual lives of the Struldbruggs fell into this pattern:
They commonly acted like Mortals till about Thirty Years old, after which by Degrees they grew melancholy and dejected.… When they came to Fourscore Years, which is reckoned the Extremity of living in this Country, they had not only all the Follies and Infirmities of other old Men, but many more which rose from the dreadful Prospect of never dying. They were not only opinionative, peevish, covetous, morose, vain, talkative; but uncapable of Friendship, and dead to all natural Affection, which never descended below their Grandchildren.… They have no Remembrance of any thing but what they learned and observed in their Youth and middle Age, and even that is very imperfect.… In talking they forget the common Appellation of Things, and the Names of Persons, even of those who are their nearest Friends and Relations. For the same Reason they never can amuse themselves with reading, because their Memory will not serve to carry them from the Beginning of a Sentence to the End.
Gulliver’s Travels was completed in 1725, when Swift was fifty-eight. He had long since established himself as a powerful intellect, able to wield his severe wit as a weapon against religious and political adversaries. For all of his success, though, Swift had also long exhibited a deep personal fear—a near-obsession—with the idea that his mind would slowly fade away. As a boy, he had watched in horror as his uncle Godwin withered under the forces of senility, and Swift never seemed to let go of the dark certainty that he would follow the same course. In his middle years, on a walk with the poet-clergyman Edward Young and other friends, Swift dramatically pointed to a diseased elm tree and declared, “I shall be like that tree; I shall die first at the top.” He frequently complained in letters to friends about the quality and future of his memory. Swift’s close friend John Boyle, the fifth Earl of Orrery, later recalled that he “heard him often lament the particular misfortune to human nature, of an utter deprivation of the senses many years before a deprivation of life.” He arranged that most of his estate be devoted to the creation of a new psychiatric hospital. (All his life, Swift also complained of dizziness, nausea, and hardness of hearing, symptoms that have in hindsight been identified with some confidence as Ménière’s disease, a disorder of the inner ear. Ménière’s disease does not lead to memory loss or dementia.)
As he headed into his early sixties, Swift’s prodigious intellect was still clearly intact, but his written and spoken complaints of memory loss increased. In “Verses on the Death of Dr. Swift,” an autobiographical caricature written in 1731 (at age sixty-four) about his own future demise, Swift proclaimed:
Poor gentleman, he droops apace:
You plainly find it in his face.
That old vertigo in his head
Will never leave him till he’s dead.
Besides, his memory decays;
He recollects not what he says;
He cannot call his friends to mind;
Forgets the place where last he dined.…
Finally, as Swift approached the age of seventy, his worst fears were realized. Little by little, he began to lose his memory. Aphasia also slowly set in. He complained in one letter, “I neither read nor write, nor remember, nor converse,” and in another that he could “hardly write ten lines without blunders, as you will see by the numbers of scratchings and blots before this letter is done. Into the bargain I have not one rag of memory.” He became dependent on his generous cousin Martha Whiteaway as a full-time caregiver. Friends began to address their letters directly to her, and she would respond on his behalf.
Swift’s decline was clearly a steady and progressive one, chiefly centered on memory. It was not, as later claimed by William Makepeace Thackeray, Samuel Johnson, and other Swift detractors, the emergence of a full-fledged, violent “lunacy” that had always been lurking in his personality. Like Cordell Annesley’s oldest sister, Thackeray and Johnson sought to misappropriate the symptoms of dementia for their own purposes—in their case to diminish Swift’s literary stature. History reveals their cynicism, and connects them morally with every other soul who lazily and/or greedily contorts a case of senility into some general impugning of the victim’s character.
The unraveling continued. By 1740, when Swift was seventy-three, Mrs. Whiteaway reported to friends that his memory had gotten so bad he could no longer finish or correct any of his written work. Swift himself wrote in a short note to her, “I hardly understand one word that I write.” Two years after that, he no longer recognized Mrs. Whiteaway and became violently abusive of her. Even with all her affection for Swift, it was more than she could take. A housekeeper and a servant were left to care for him. A commission of friends and local officials concluded in a report in 1742 (when Swift was seventy-five) that he was “of such unsound mind and memory that he is incapable of transacting any business, or managing, conducting, or taking care either of his estate or person.” In other words, he had apparently reached the middle stages of senile dementia.
It was in June of that same year that a violent and controversial episode with a local rector, Dr. Francis Wilson, marked the end of Swift’s social interactions. There are differing accounts as to exactly what happened during the afternoon Swift spent with Wilson, a friend who was at that time seeking a high position at Dublin’s St. Patrick’s Cathedral, where Swift was dean. It is clear that Wilson came to visit Swift and persuaded him to leave his home and dine with him without the usual accompaniment of Swift’s housekeeper/aide. It is also clear that Swift drank a good deal of wine and spirits at the meal and afterward—perhaps at the strong urging of Wilson. Whatever happened between the two, whether or not Wilson was loading Swift up with liquor in order to gain his endorsement for the new post, the evening ended badly.
On the way home in a carriage, Swift became enraged with Wilson and struck him. According to Wilson’s account, Swift had suddenly and without provocation flown into “a most astonishing rage,” cried out that Wilson was the devil, hit him, scratched him, and tried to poke his eyes out. Swift’s arm was later very badly bruised. Wilson then ordered a stop to the carriage and cursed his companion. “You are a stupid old blockhead,” he yelled at the severely demented Swift as he fled, “and an old rascal.” Swift was taken back to his home.
By the time he got there, though, he had apparently forgotten the entire event. “Where is Dr. Wilson?” he asked his servant. “Ought not the doctor to be here this afternoon?” He could not recall even having seen him that day.
The clear intent of Swift’s Struldbrugg morality play was to illustrate the same paradox that demographers would stumble onto two centuries later: By extending our lives, we achieve suffering. Of this Swift was certain. He also seemed to know somehow that his own demise would embody this theme. The combination of his progressive dementia and a series of unrelated and excruciating physical setbacks rendered him an almost perfect icon of wretched aging. As things got worse and worse, Lord Orrery wrote to Martha Whiteaway. “I am sorry to hear his appetite is good,” he offered. “… The man I wished to live the longest, I [now] wish the soonest dead. It is the only blessing that can now befall him.”
Orrery’s ambivalence is now ours. It is estimated that one in nine baby boomers could live to be one hundred years old. For their grandparents’ generation that figure was one in five hundred. We are in essence creating a world full of Struldbruggs, people living en masse into very old age and bearing the consequences of it—the “rise in frailty” and the prolongation of morbidity that will inevitably accompany increasing age.
Still, we pursue longevity, as individuals and as a species, and we do so without apology. It is our natural instinct to want to live, or, perhaps more accurately, to desperately want not to die, and to want our friends and relatives not to die either.
How will we face this new abundance of frailty? By merely trying to conquer it all, crying out in frustration whenever we fail? Or by also seeking to reconcile ourselves to the inevitable, to decline with grace, to establish a calm acceptance of our mortality? “It is time to be old/To take in sail,” Emerson wrote in “Terminus.” “…‘The port, well worth the cruise, is near/And every wave is charmed.’” In his diminished vista, he set out to live a life of peace and acceptance, a slow and happy fade.
In order for us to make intelligent choices about how we decline, it is important to understand why death exists in the first place. Why, indeed, are we not immortal? Is there some social or biological utility to death? Most everything in the physical world seems to have a rational basis if we look closely enough. So what is the point of dying?
In 1825, a British actuary named Benjamin Gompertz noticed something peculiar in his mortality tables. Plotted on a graph, ages of death formed a graceful U. The probability of dying, he noticed, was very high at birth; it declined rapidly during the first year of life, and continued to decline up to the age of sexual maturity. From there it increased rapidly, exponentially even, until very old age. In sum, the older you got after birth, the less of a chance you had of dying—up until puberty, that is, at which point the older you got, the more of a chance you had of dying. Gompertz theorized that he had detected a hidden Law of Mortality, some sort of natural order of living and dying.
He had. Over the following two centuries, evolutionary biologists and population geneticists helped refine this law, which helped them understand the purpose of death—or, rather, its purposelessness.
Death, they came to realize, is not a part of the plan. It is not programmed into the code of life in the same way that, say, a firecracker is designed to explode. Death is also not nature’s way of clearing space for future generations. Nor is it a genetic guarantee that miscreants will not be able to make trouble into eternity. Death is not nature’s way of rationing energy so that the maximum number of individuals get a chance to live.
Rather, death is an unwanted but unavoidable by-product of life, in the same way that a wood fire leaves us with a pile of carbon residue. The fundamental design of all life, according to the law of natural selection, is the continual adaptation of a species, through reproduction, to maximize long-term survival. Adaptation happens through natural genetic variation; those variants best suited to the environment will survive the longest.
There is no particular requirement that all of these genetic variants have a built-in death spiral. It just happens that, so far, all of the best possible designs for a successful reproductive organism result in a structure that is highly vulnerable to deterioration sometime after the reproductive period. Think of a light bulb that is designed to burn very brightly; its essential purpose is its intensity, and a by-product of that intensity is that it will burn out.
A more precise analogy, one suggested by the University of Chicago’s Olshansky, is the Indianapolis 500 race car, a machine designed with one very specific goal—to get to the end of that five-hundredth mile with as much speed as possible. What happens to the car in the 501st mile and every mile thereafter is of no particular concern to the designer. Every detail of design must be aimed toward the first five hundred miles. “From an evolutionary perspective,” says Olshansky, “the race is to reproduction, which includes a time for the production of offspring, a possible child-rearing period, and for some species (for instance, human beings) a grandparenting period where parental contributions can be made to the reproductive success of their own offspring.”
The point of the analogy is that, in designing a vehicle to go a precise distance at top speed, specific choices have to be made that favor those priorities. Those choices seem to inevitably result in long-term weaknesses that will prevent the car from lasting as long as it might have. “It is important to realize,” says Olshansky, “that the cars are not intentionally engineered to fall apart—they are simply not designed to run indefinitely beyond the end of the race.”
Hence Benjamin Gompertz’s U-shaped mortality curve. The Law of Mortality dictates that every living organism makes reproduction a priority at the expense of longevity. If an individual survives the harrowing process of being born, it has a built-in maximized chance of making it to sexual potency. After that, the forces of deterioration start to overcome the forces of life.
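Gompertz’s law survives in modern demography as a simple formula: after maturity, the yearly risk of death grows exponentially with age. For readers who would rather see the U than imagine it, here is a minimal sketch in Python. It is an illustration, not Gompertz’s own calculation: it pairs an exponentially rising “Gompertz term” with an invented, steeply declining juvenile term, and every parameter value is arbitrary, chosen only to make the shape of the curve visible.

```python
import math

# Toy model of the U-shaped mortality curve: a juvenile hazard that is
# high at birth and falls fast, a small constant background risk, and a
# Gompertz term that rises exponentially with age (here the adult risk
# doubles roughly every eight years). All numbers are illustrative only.
def hazard(age):
    juvenile = 0.10 * math.exp(-0.9 * age)
    background = 0.001
    gompertz = 0.0001 * math.exp(0.085 * age)
    return juvenile + background + gompertz

for age in (0, 1, 5, 15, 30, 50, 70, 90):
    print(f"age {age:3d}: yearly risk of dying ~ {hazard(age):.4f}")
```

Printed out, the risk plunges from birth, bottoms out around sexual maturity, and then climbs relentlessly: Gompertz’s graceful U.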
Epidemiologists refer to the postreproductive period of life as the “genetic dustbin,” because any genetic expression that occurs only in late life is simply beyond the reach of natural selection. A gene that causes a man to be very short in stature might help or hurt his chances to adapt to the environment and to procreate. But a gene that causes a woman’s hair to turn gray when she is seventy will have no effect at all. Her genes are already passed on or not passed on, without any regard to whether that trait was a desirable one.
So-called hidden diseases, then, are hidden not just in the sense of not being visible to pre-twentieth-century human beings. More importantly, they are hidden from the forces of natural selection.
In 1998, Olshansky and his colleagues were surprised to discover something new about the Law of Mortality: that medical science had been able to alter it. Sorting through mortality data from mice, beagles, and humans who had died of natural causes, they were able to chart Gompertz U-shaped mortality curves in order to reveal each animal’s genetic schedule of mortality—what they called its “mortality signature.” The human mortality signature turned out to be eighty-three. This means that if all of the external threats to the human body were removed, the point at which it would be most likely to simply give out averages around the age of eighty-three.
What they didn’t expect, though, was that the human mortality curve would have a different shape from those of the other animals. This meant that modern medicine had not only reduced external threats but had also somehow overcome internal rhythms. “The change in rates,” reported Olshansky, “indicates that the intrinsic mortality signature of human beings, something that we once thought was intractable, is being modified.”
We’re pushing our bodies past their own innate limits. Thanks to extraordinary medical interventions in cancer, heart disease, and other conditions, humankind is now living longer than our genes would ordinarily allow. We are outliving our own mortality signature, living on what epidemiologists call “manufactured time.” It is the cushion of extra life that we are creating for ourselves with our ingenuity and our tools.
The real challenge, of course, is to ensure that this new time is something we are happy to have.
This was the second day of nearly total confusion for Ed. He was back pretending to drive the locomotives and imagining that I was one of the crew. I was asked to call the dispatcher and find out how to get back to Walla Walla.
Periodically, he asked me what Arda was doing and if I’d heard from her, and told me what a special person she is to him. (Arda is me.)
This afternoon we went for a ride, then stopped for a bite to eat. Our bill was $13.11, which Ed insisted on paying. He pulled two fifty-dollar bills out of his wallet. I told him I had the right change so I took the bill and paid it.
When we pulled up under the carport at home he said, “I’ll wait here. What are we stopping here for?” He is at least cheerful and relaxed, much like a little child waiting to be told what to do. It could be worse.
—A.B.
Walla Walla, Washington
Chapter 12
HUMANIZE THE MOUSE
In the 1980s, as researchers began to contemplate the possibility of trying to defeat Alzheimer’s disease, the search for a fitting animal model became paramount. No one could develop a successful Alzheimer’s drug without first testing it, and refining it, on animals. In order to save human lives, many thousands of nonhuman lives would first be forfeited to science. These animals would be bred according to certain desirable characteristics, kept in strictly controlled environments, examined for changes in behavior and intelligence, and ultimately sacrificed. Their brains would be taken apart, fixed in solution, sliced thinly, and examined under a microscope; or their brains would be spun in a centrifuge and analyzed chemically; or their brains would be placed in a petri dish with a variety of toxins. Collectively, these brains would serve as the proverbial drawing board onto which researchers would sketch all possible ideas for a cure.