
IELTS READING QUESTION-TYPE PRACTICE – MATCHING SENTENCE ENDINGS

Mini warm-up practice test – Matching Sentence Endings

Complete each sentence with the correct ending A-H.

Optimism and Health

Mindset is all. How you start the year will set the template for the rest, and two scientifically backed character traits hold the key: optimism and resilience (if the prospect leaves you feeling pessimistically spineless, the good news is that you can significantly boost both of these qualities).

Faced with 12 months of plummeting economics and rising human distress, staunchly maintaining a rosy view might seem deludedly Pollyannaish. But here we encounter the optimism paradox. As Brice Pitt, an emeritus professor of the psychiatry of old age at Imperial College, London, told me: “Optimists are unrealistic. Depressive people see things as they really are, but that is a disadvantage from an evolutionary point of view. Optimism is a piece of evolutionary equipment that carried us through millennia of setbacks.” Optimists have plenty to be happy about. In other words, if you can convince yourself that things will get better, the odds of it happening will improve because you keep on playing the game. In this light, optimism “is a habitual way of explaining your setbacks to yourself”, reports Martin Seligman, the psychology professor and author of Learned Optimism. The research shows that when times get tough, optimists do better than pessimists - they succeed better at work, respond better to stress, suffer fewer depressive episodes, and achieve more personal goals.

Studies also show that belief can help with the financial pinch. Chad Wallens, a social forecaster at the Henley Centre who surveyed middle-class Britons’ beliefs about income, has found that “the people who feel wealthiest, and those who feel poorest, actually have almost the same amount of money at their disposal. Their attitudes and behaviour patterns, however, are different from one another.”

Optimists have something else to be cheerful about - in general, they are more robust. For example, a study of 660 volunteers by the Yale University psychologist Dr. Becca Levy found that thinking positively adds an average of seven years to your life. Other American research claims to have identified a physical mechanism behind this. A Harvard Medical School study of 670 men found that the optimists have significantly better lung function. The lead author, Dr. Rosalind Wright, believes that attitude somehow strengthens the immune system. “Preliminary studies on heart patients suggest that, by changing a person’s outlook, you can improve their mortality risk,” she says.

Few studies have tried to ascertain the proportion of optimists in the world. But a 1995 nationwide survey conducted by the American magazine Adweek found that about half the population counted themselves as optimists, with women slightly more apt than men (53 per cent versus 48 per cent) to see the sunny side.

Of course, there is no guarantee that optimism will insulate you from the crunch’s worst effects, but the best strategy is still to keep smiling and thank your lucky stars. Because (as every good sports coach knows) adversity is character-forming - so long as you practise the skills of resilience. Research among tycoons and business leaders shows that the path to success is often littered with failure: a record of sackings, bankruptcies and blistering castigation. But instead of curling into a foetal ball beneath the coffee table, they resiliently pick themselves up, learn from their pratfalls and march boldly towards the next opportunity.

The American Psychological Association defines resilience as the ability to adapt in the face of adversity, trauma or tragedy. A resilient person may go through difficulty and uncertainty, but he or she will doggedly bounce back. Optimism is one of the central traits required in building resilience, say Yale University investigators in the Annual Review of Clinical Psychology. They add that resilient people learn to hold on to their sense of humour and this can help them to keep a flexible attitude when big changes of plan are warranted. The ability to accept your lot with equanimity also plays an important role, the study adds.

One of the best ways to acquire resilience is through experiencing a difficult childhood, the sociologist Steven Stack reports in the Journal of Social Psychology. For example, short men are less likely to commit suicide than tall guys, he says, because shorties develop psychological defence skills to handle the bullies and mickey-taking that their lack of stature attracts. By contrast, those who enjoyed adversity-free youths can get derailed by setbacks later on because they’ve never been inoculated against aggro. If you are handicapped by having had a happy childhood, then practising proactive optimism can help you to become more resilient. Studies of resilient people show that they take more risks; they court failure and learn not to fear it.

And despite being thick-skinned, resilient types are also more open than average to other people. Bouncing through knock-backs is all part of the process. It’s about optimistic risk-taking - being confident that people will like you. Simply smiling and being warm to people can help. It’s an altruistic path to self-interest and if it achieves nothing else, it will reinforce an age-old adage: hard times can bring out the best in you.

TEST 1 – Honey bees in trouble

Complete each sentence with the correct ending A-F.


A. Recently, ominous headlines have described a mysterious ailment, colony collapse disorder (CCD), that is wiping out the honeybees that pollinate many crops. Without honeybees, the story goes, fields will be sterile, economies will collapse, and food will be scarce.

B. But what few accounts acknowledge is that what’s at risk is not itself a natural state of affairs. For one thing, in the United States, where CCD was first reported and has had its greatest impacts, honeybees are not a native species. Pollination in modern agriculture isn’t alchemy, it’s industry. The total number of hives involved in the U.S. pollination industry has been somewhere between 2.5 million and 3 million in recent years. Meanwhile, American farmers began using large quantities of organophosphate insecticides, planted large-scale crop monocultures, and adopted “clean farming” practices that scrubbed native vegetation from field margins and roadsides. These practices killed many native bees outright—they’re as vulnerable to insecticides as any agricultural pest—and made the agricultural landscape inhospitable to those that remained. Concern about these practices and their effects on pollinators isn’t new—in her 1962 ecological alarm cry Silent Spring, Rachel Carson warned of a ‘Fruitless Fall’ that could result from the disappearance of insect pollinators.

C. If that ‘Fruitless Fall’ has not—yet—occurred, it may be largely thanks to the honeybee, which farmers turned to as the ability of wild pollinators to service crops declined. The honeybee has been semi-domesticated since the time of the ancient Egyptians, but it wasn’t just familiarity that determined this choice: the bees’ biology is in many ways suited to the kind of agricultural system that was emerging. For example, honeybee hives can be closed up and moved out of the way when pesticides are applied to a field. The bees are generalist pollinators, so they can be used to pollinate many different crops. And although they are not the most efficient pollinator of every crop, honeybees have strength in numbers, with 20,000 to 100,000 bees living in a single hive. “Without a doubt, if there was one bee you wanted for agriculture, it would be the honeybee,” says Jim Cane, of the U.S. Department of Agriculture. The honeybee, in other words, has become a crucial cog in the modern system of industrial agriculture. That system delivers more food, and more kinds of it, to more places, more cheaply than ever before. But that system is also vulnerable, because making a farm field into the photosynthetic equivalent of a factory floor, and pollination into a series of continent-long assembly lines, also leaches out some of the resilience characteristic of natural ecosystems.

D. Breno Freitas, an agronomist, pointed out that in nature such a high degree of specialization usually is a very dangerous game: it works well while all the rest is in equilibrium, but runs quickly to extinction at the least disbalance. In effect, by developing an agricultural system that is heavily reliant on a single pollinator species, we humans have become riskily overspecialized. And when the human-honeybee relationship is disrupted, as it has been by colony collapse disorder, the vulnerability of that agricultural system begins to become clear.

E. In fact, a few wild bees are already being successfully managed for crop pollination. “The problem is trying to provide native bees in adequate numbers on a reliable basis in a fairly short number of years in order to service the crop,” Jim Cane says. “You’re talking millions of flowers per acre in a two-to-three-week time frame, or less, for a lot of crops.” On the other hand, native bees can be much more efficient pollinators of certain crops than honeybees, so you don’t need as many to do the job. For example, about 750 blue orchard bees (Osmia lignaria) can pollinate a hectare of apples or almonds, a task that would require roughly 50,000 to 150,000 honeybees. There are bee tinkerers engaged in similar work in many corners of the world. In Brazil, Breno Freitas has found that Centris tarsata, the native pollinator of wild cashew, can survive in commercial cashew orchards if growers provide a source of floral oils, such as by interplanting their cashew trees with Caribbean cherry.

F. In certain places, native bees may already be doing more than they’re getting credit for. Ecologist Rachael Winfree recently led a team that looked at pollination of four summer crops (tomato, watermelon, peppers, and muskmelon) at 29 farms in the region of New Jersey and Pennsylvania. Winfree’s team identified 54 species of wild bees that visited these crops, and found that wild bees were the most important pollinators in the system: even though managed honeybees were present on many of the farms, wild bees were responsible for 62 percent of flower visits in the study. In another study focusing specifically on watermelon, Winfree and her colleagues calculated that native bees alone could provide sufficient pollination at 90 percent of the 23 farms studied. By contrast, honeybees alone could provide sufficient pollination at only 78 percent of farms.

G. “The region I work in is not typical of the way most food is produced,” Winfree admits. In the Delaware Valley, most farms and farm fields are relatively small, each farmer typically grows a variety of crops, and farms are interspersed with suburbs and other types of land use which means there are opportunities for homeowners to get involved in bee conservation, too. The landscape is a bee-friendly patchwork that provides a variety of nesting habitat and floral resources distributed among different kinds of crops, weedy field margins, fallow fields, suburban neighborhoods, and semi-natural habitat like old woodlots, all at a relatively small scale. In other words, ‘pollinator-friendly’ farming practices would not only aid pollination of agricultural crops, but also serve as a key element in the overall conservation strategy for wild pollinators, and often aid other wild species as well.

H. Of course, not all farmers will be able to implement all of these practices. And researchers are suggesting a shift to a kind of polyglot agricultural system. For some small-scale farms, native bees may indeed be all that’s needed. For larger operations, a suite of managed bees—with honeybees filling the generalist role and other, native bees pollinating specific crops—could be augmented by free pollination services from resurgent wild pollinators. In other words, they’re saying, we still have an opportunity to replace a risky monoculture with something diverse, resilient, and robust.

TEST 2 – Internal Market: Selling the inside

Complete each sentence with the correct ending A-E.

NB You can use any letter more than once.

When you think of marketing, you more than likely think of marketing to your customers: How can you persuade more people to buy what you sell? But another "market" is just as important: your employees, the very people who can make the brand come alive for your customers. Yet in our work helping executives develop and carry out branding campaigns, my colleagues and I have found that companies very often ignore this critical constituency.

Why is internal marketing so important? First, because it's the best way to help employees make a powerful emotional connection to the products and services you sell. Without that connection, employees are likely to undermine the expectations set by your advertising. In some cases, this is because they simply don't understand what you have promised the public, so they end up working at cross-purposes. In other cases, it may be they don't actually believe in the brand and feel disengaged or, worse, hostile toward the company.

We've found that when people care about and believe in the brand, they're motivated to work harder and their loyalty to the company increases. Employees are united and inspired by a common sense of purpose and identity.

Unfortunately, in most companies, internal marketing is done poorly, if at all. While executives recognise the need to keep people informed about the company's strategy and direction, few understand the need to convince employees of the brand's power—they take it as a given.

Employees need to hear the same messages that you send out to the marketplace. At most companies, however, internal and external communications are often mismatched. This can be very confusing, and it threatens employees' perceptions of the company's integrity: They are told one thing by management but observe that a different message is being sent to the public. One health insurance company, for instance, advertised that the welfare of patients was the company's number one priority, while employees were told that their main goal was to increase the value of their stock options through cost reductions. And one major financial services institution told customers that it was making a major shift in focus from being a financial retailer to a financial adviser, but, a year later, research showed that the customer experience with the company had not changed. It turned out that company leaders had not made an effort to sell the change internally, so employees were still churning out transactions and hadn't changed their behavior to match their new adviser role.

Enabling employees to deliver on customer expectations is important, of course, but it's not the only reason a company needs to match internal and external messages. Another reason is to help push the company to achieve goals that might otherwise be out of reach. In 1997, when IBM launched its e-business campaign (which is widely credited for turning around the company's image), it chose to ignore research that suggested consumers were unprepared to embrace IBM as a leader in e-business. Although to the outside world this looked like an external marketing effort, IBM was also using the campaign to align employees around the idea of the Internet as the future of technology. The internal campaign changed the way employees thought about everything they did, from how they named products to how they organised staff to how they approached selling. The campaign was successful largely because it gave employees a sense of direction and purpose, which in turn restored their confidence in IBM's ability to predict the future and lead the technology industry. Today, research shows that people are four times more likely to associate the term "e-business" with IBM than with its nearest competitor.

Perhaps even more important, by taking employees into account, a company can avoid creating a message that doesn't resonate with staff or, worse, one that builds resentment. In 1996, United Airlines shelved its "Come Fly the Friendly Skies" slogan when presented with a survey that revealed the depth of customer resentment toward the airline industry. In an effort to own up to the industry's shortcomings, United launched a new campaign, "Rising," in which it sought to differentiate itself by acknowledging poor service and promising incremental improvements such as better meals. While this was a logical premise for the campaign given the tenor of the times, a campaign focusing on customers' distaste for flying was deeply discouraging to the staff. Employee resentment ultimately made it impossible for United to deliver the improvements it was promising, which in turn undermined the "Rising" pledge.

Three years later, United decided employee opposition was undermining its success and pulled the campaign. It has since moved to a more inclusive brand message with the line "United," which both audiences can embrace. Here, a fundamental principle of advertising—find and address a customer concern—failed United because it did not consider the internal market.

When it comes to execution, the most common and effective way to link internal and external marketing campaigns is to create external advertising that targets both audiences. IBM used this tactic very effectively when it launched its e-business campaign. It took out an eight-page ad in the Wall Street Journal declaring its new vision, a message directed at both customers and internal stakeholders. This is an expensive way to capture attention, but if used sparingly, it is the most powerful form of communication; in fact, you need do it only once for everyone in the company to read it. There's a symbolic advantage as well. Such a tactic signals that the company is taking its pledge very seriously; it also signals transparency—the same message going out to both audiences.

Advertising isn’t the only way to link internal and external marketing. At Nike, a number of senior executives now hold the additional title of "Corporate Storyteller." They deliberately avoid stories of financial successes and concentrate on parables of "just doing it," reflecting and reinforcing the company's ad campaigns. One tale, for example, recalls how legendary coach and Nike co-founder Bill Bowerman, in an effort to build a better shoe for his team, poured rubber into the family waffle iron, giving birth to the prototype of Nike's famous Waffle Sole. By talking about such inventive moves, the company hopes to keep the spirit of innovation that characterises its ad campaigns alive and well within the company.

But while their messages must be aligned, companies must also keep external promises a little ahead of internal realities. Such promises provide incentives for employees and give them something to live up to. In the 1980s, Ford turned "Quality Is Job 1" from an internal rallying cry into a consumer slogan in response to the threat from cheaper, more reliable Japanese cars. It did so before the claim was fully justified, but by placing it in the public arena, it gave employees an incentive to match the Japanese. If the promise is pushed too far ahead, however, it loses credibility. When a beleaguered British Rail launched a campaign announcing service improvements under the banner "We're Getting There," it did so prematurely. By drawing attention to the gap between the promise and the reality, it prompted destructive press coverage. This, in turn, demoralised staff, who had been legitimately proud of the service advances they had made.

TEST 3 – Musical Maladies

Complete each sentence with the correct ending A-F.


Music and the brain are both endlessly fascinating subjects, and as a neuroscientist specialising in auditory learning and memory, I find them especially intriguing. So I had high expectations of Musicophilia, the latest offering from neurologist and prolific author Oliver Sacks. And I confess to feeling a little guilty reporting that my reactions to the book are mixed.

Sacks himself is the best part of Musicophilia. He richly documents his own life in the book and reveals highly personal experiences. The photograph of him on the cover of the book—which shows him wearing headphones, eyes closed, clearly enchanted as he listens to Alfred Brendel perform Beethoven’s Pathétique Sonata—makes a positive impression that is borne out by the contents of the book. Sacks’s voice throughout is steady and erudite but never pontifical. He is neither self-conscious nor self-promoting. The preface gives a good idea of what the book will deliver. In it Sacks explains that he wants to convey the insights gleaned from the “enormous and rapidly growing body of work on the neural underpinnings of musical perception and imagery, and the complex and often bizarre disorders to which these are prone.” He also stresses the importance of “the simple art of observation” and “the richness of the human context.” He wants to combine “observation and description with the latest in technology,” he says, and to imaginatively enter into the experience of his patients and subjects. The reader can see that Sacks, who has been practicing neurology for 40 years, is torn between the “old-fashioned” path of observation and the new-fangled, high-tech approach: he knows that he needs to take heed of the latter, but his heart lies with the former.

The book consists mainly of detailed descriptions of cases, most of them involving patients whom Sacks has seen in his practice. Brief discussions of contemporary neuroscientific reports are sprinkled liberally throughout the text. Part I, “Haunted by Music,” begins with the strange case of Tony Cicoria, a nonmusical, middle-aged surgeon who was consumed by a love of music after being hit by lightning. He suddenly began to crave listening to piano music, which he had never cared for in the past. He started to play the piano and then to compose music, which arose spontaneously in his mind in a “torrent” of notes. How could this happen? Was the cause psychological? (He had had a near-death experience when the lightning struck him.) Or was it the direct result of a change in the auditory regions of his cerebral cortex? Electroencephalography (EEG) showed his brain waves to be normal in the mid-1990s, just after his trauma and subsequent “conversion” to music. There are now more sensitive tests, but Cicoria has declined to undergo them; he does not want to delve into the causes of his musicality. What a shame!

Part II, “A Range of Musicality,” covers a wider variety of topics, but unfortunately, some of the chapters offer little or nothing that is new. For example, chapter 13, which is five pages long, merely notes that the blind often have better hearing than the sighted. The most interesting chapters are those that present the strangest cases. Chapter 8 is about “amusia,” an inability to hear sounds as music, and “dysharmonia,” a highly specific impairment of the ability to hear harmony, with the ability to understand melody left intact. Such specific “dissociations” are found throughout the cases Sacks recounts.

To Sacks’s credit, part III, “Memory, Movement and Music,” brings us into the underappreciated realm of music therapy. Chapter 16 explains how “melodic intonation therapy” is being used to help expressive aphasic patients (those unable to express their thoughts verbally following a stroke or other cerebral incident) once again become capable of fluent speech. In chapter 20, Sacks demonstrates the near-miraculous power of music to animate Parkinson’s patients and other people with severe movement disorders, even those who are frozen into odd postures. Scientists cannot yet explain how music achieves this effect.

To readers who are unfamiliar with neuroscience and music behavior, Musicophilia may be something of a revelation. But the book will not satisfy those seeking the causes and implications of the phenomena Sacks describes. For one thing, Sacks appears to be more at ease discussing patients than discussing experiments. And he tends to be rather uncritical in accepting scientific findings and theories. It’s true that the causes of music-brain oddities remain poorly understood.

However, Sacks could have done more to draw out some of the implications of the careful observations that he and other neurologists have made and of the treatments that have been successful. For example, he might have noted that the many specific dissociations among components of music comprehension, such as loss of the ability to perceive harmony but not melody, indicate that there is no music center in the brain. Because many people who read the book are likely to believe in the brain localisation of all mental functions, this was a missed educational opportunity.

Another conclusion one could draw is that there seem to be no “cures” for neurological problems involving music. A drug can alleviate a symptom in one patient and aggravate it in another, or can have both positive and negative effects in the same patient. Treatments mentioned seem to be almost exclusively antiepileptic medications, which “damp down” the excitability of the brain in general; their effectiveness varies widely.

Finally, in many of the cases described here the patient with music-brain symptoms is reported to have “normal” EEG results. Although Sacks recognizes the existence of new technologies, among them far more sensitive ways to analyze brain waves than the standard neurological EEG test, he does not call for their use. In fact, although he exhibits the greatest compassion for patients, he conveys no sense of urgency about the pursuit of new avenues in the diagnosis and treatment of music-brain disorders. This absence echoes the book’s preface, in which Sacks expresses fear that “the simple art of observation may be lost” if we rely too much on new technologies. He does call for both approaches, though, and we can only hope that the neurological community will respond.

TEST 4 – Theory or Practice? What is the point of research carried out by biz schools?

Complete each sentence with the correct ending A-E.


Students go to universities and other academic institutions to prepare for their future. We pay tuition and struggle through classes in the hopes that we can find a fulfilling and exciting career. But the choice of your university has a large influence on your future. How can you know which university will prepare you the best for your future? Like other academic institutions, business schools are judged by the quality of the research carried out by their faculties. Professors must both teach students and also produce original research in their own field.

The quality of this research is assessed by academic publications. At the same time, universities have another responsibility to equip their students for the real world, however that is defined. Most students learning from professors will not go into academics themselves—so how do academics best prepare them for their future careers, whatever that may be? Whether academic research actually produces anything that is useful to the practice of business, or even whether it is its job to do so, are questions that can provoke vigorous arguments on campus.

The debate, which first flared during the 1950s, was reignited in August, when AACSB International, the most widely recognised global accrediting agency for business schools, announced it would consider changing the way it evaluates research. The news followed rather damning criticism in 2002 from Jeffrey Pfeffer, a Stanford professor, and Christina Fong of Washington University, which questioned whether business education in its current guise was sustainable. The study found that traditional modes of academia were not adequately preparing students for the kind of careers they faced in current times. The most controversial recommendation in AACSB’s draft report (which was sent round to administrators for their comment) is that the schools should be required to demonstrate the value of their faculties’ research not simply by listing its citations in journals, but by demonstrating the impact it has in the professional world. New qualifiers, such as average incomes, student placement in top firms and business collaborations, would now be considered just as important as academic publications.

AACSB justifies its stance by saying that it wants schools and faculty to play to their strengths, whether they be in pedagogy, in the research of practical applications, or in scholarly endeavor. Traditionally, universities operate in a pyramid structure. Everyone enters and stays in an attempt to be successful in their academic field. A psychology professor must publish competitive research in the top neuroscience journals. A Cultural Studies professor must send graduate students on new field research expeditions to be taken seriously. This research is the core of a university’s output. And research of any kind is expensive—AACSB points out that business schools in America alone spend more than $320m a year on it. So it seems legitimate to ask for what purpose it is undertaken.

If a school chose to specialise in professional outputs rather than academic outputs, it could use such a large sum of money and redirect it into more fruitful programs. For example, if a business school wanted a larger presence of employees at top financial firms, this money may be better spent on a career center which focuses on building the skills of students, rather than paying for more high-level research to be done through the effort of faculty. A change in evaluation could also open the door to inviting more professionals from different fields to teach as adjuncts. Students could take accredited courses from people who are currently working in their dream field. The AACSB insists that universities answer the question as to why research is the most critical component of traditional education.

On one level, the question is simple to answer. Research in business schools, as anywhere else, is about expanding the boundaries of knowledge; it thrives on answering unasked questions. Surely this pursuit of knowledge is still important to the university system. Our society progresses because we learn how to do things in new ways, a process which depends heavily on research and academics. But one cannot ignore the other obvious practical uses of research publications. Research is also about cementing schools’ and professors’ reputations. Schools gain kudos from their faculties’ record of publication: which journals publish them, and how often. In some cases, such as with government-funded schools in Britain, it can affect how much money they receive. For professors, the mantra is often “publish or perish”. Their careers depend on being seen in the right journals.

But at a certain point, one has to wonder whether this research is being done for the benefit of the university or for the students the university aims to teach. Greater publications will attract greater funding, which will in turn be spent on better publications. Students seeking to enter professions outside academia find this cycle frustrating, and often see their professors as being part of the “Ivory Tower” of academia, operating in a self-contained community that has little influence on the outside world.

Part of the trouble is that the journals labour under a similar ethos. They publish more than 20,000 articles each year. Most of the research is highly quantitative, hypothesis-driven and esoteric. As a result, it is almost universally unread by real-world managers. Much of the research criticises other published research.

A paper in a 2006 issue of Strategy & Leadership commented that “research is not designed with managers’ needs in mind, nor is it communicated in the journals they read. For the most part, it has become a self-referential closed system irrelevant to corporate performance.” The AACSB demands that this segregation must change for the future of higher education. If students must invest thousands of dollars for an education as part of their career path, the academics which serve the students should be more fully incorporated into the professional world. This means that universities must focus on other strengths outside of research, such as professional networks, technology skills, and connections with top business firms around the world. Though many universities resisted the report, today’s world continues to change. The universities which prepare students for our changing future have little choice but to change with new trends and new standards.

TEST 5 – What Do Babies Know?

Complete each sentence with the correct ending A-E.

As Daniel Haworth is settled into a high chair and wheeled behind a black screen, a sudden look of worry furrows his 9-month-old brow. His dark blue eyes dart left and right in search of the familiar reassurance of his mother’s face. She calls his name and makes soothing noises, but Daniel senses something unusual is happening. He sucks his fingers for comfort, but, finding no solace, his mouth crumples, his body stiffens, and he lets rip an almighty shriek of distress. This is the usual expression when babies are left alone or abandoned. Mom picks him up, reassures him, and two minutes later, a chortling and alert Daniel returns to the darkened booth behind the screen and submits himself to baby lab, a unit set up in 2005 at the University of Manchester in northwest England to investigate how babies think.

Watching infants piece life together, seeing their senses, emotions and motor skills take shape, is a source of mystery and endless fascination—at least to parents and developmental psychologists. We can decode their signals of distress or read a million messages into their first smile. But how much do we really know about what’s going on behind those wide, innocent eyes? How much of their understanding of and response to the world comes preloaded at birth? How much is built from scratch by experience? Such are the questions being explored at baby lab. Though the facility is just 18 months old and has tested only 100 infants, it’s already challenging current thinking on what babies know and how they come to know it.

Daniel is now engrossed in watching video clips of a red toy train on a circular track. The train disappears into a tunnel and emerges on the other side. A hidden device above the screen is tracking Daniel’s eyes as they follow the train and measuring the diameter of his pupils 50 times a second. As the child gets bored—or “habituated”, as psychologists call the process—his attention level steadily drops. But it picks up a little whenever some novelty is introduced. The train might be green, or it might be blue. And sometimes an impossible thing happens—the train goes into the tunnel one color and comes out another.

Variations of experiments like this one, examining infant attention, have been a standard tool of developmental psychology ever since the Swiss pioneer of the field, Jean Piaget, started experimenting on his children in the 1920s. Piaget’s work led him to conclude that infants younger than 9 months have no innate knowledge of how the world works or any sense of “object permanence” (that people and things still exist even when they’re not seen). Instead, babies must gradually construct this knowledge from experience. Piaget’s “constructivist” theories were massively influential on postwar educators and psychologists, but over the past 20 years or so they have been largely set aside by a new generation of “nativist” psychologists and cognitive scientists whose more sophisticated experiments led them to theorise that infants arrive already equipped with some knowledge of the physical world and even rudimentary programming for math and language.

Baby lab director Sylvain Sirois has been putting these smart-baby theories through a rigorous set of tests. His conclusions so far tend to be more Piagetian: “Babies,” he says, “know nothing.” What Sirois and his postgraduate assistant Iain Jackson are challenging is the interpretation of a variety of classic experiments begun in the mid-1980s in which babies were shown physical events that appeared to violate such basic concepts as gravity, solidity and contiguity. In one such experiment, by University of Illinois psychologist Renee Baillargeon, a hinged wooden panel appeared to pass right through a box. Baillargeon and M.I.T.’s Elizabeth Spelke found that babies as young as 3 1/2 months would reliably look longer at the impossible event than at the normal one. Their conclusion: babies have enough built-in knowledge to recognise that something is wrong.

Sirois does not take issue with the way these experiments were conducted. “The methods are correct and replicable,” he says, “it’s the interpretation that’s the problem.” In a critical review to be published in the forthcoming issue of the European Journal of Developmental Psychology, he and Jackson pour cold water over recent experiments that claim to have observed innate or precocious social cognition skills in infants. His own experiments indicate that a baby’s fascination with physically impossible events merely reflects a response to stimuli that are novel. Data from the eye tracker and the measurement of the pupils (which widen in response to arousal or interest) show that impossible events involving familiar objects are no more interesting than possible events involving novel objects. In other words, once Daniel has seen the red train come out of the tunnel green a few times, he gets as bored as when it stays the same color. The mistake of previous research, says Sirois, has been to leap to the conclusion that infants can understand the concept of impossibility from the mere fact that they are able to perceive some novelty in it. “The real explanation is boring,” he says.

So how do babies bridge the gap between knowing squat and drawing triangles, a task Daniel’s sister Lois, 2 1/2, is happily tackling as she waits for her brother? “Babies have to learn everything, but as Piaget was saying, they start with a few primitive reflexes that get things going,” said Sirois. For example, hardwired in the brain is an instinct that draws a baby’s eyes to a human face. From brain imaging studies we also know that the brain has some sort of visual buffer that continues to represent objects after they have been removed, a lingering perception rather than conceptual understanding. So when babies encounter novel or unexpected events, Sirois explains, “there’s a mismatch between the buffer and the information they’re getting at that moment. And what you do when you’ve got a mismatch is you try to clear the buffer. And that takes attention.” So learning, says Sirois, is essentially the laborious business of resolving mismatches. “The thing is, you can do a lot of it with this wet sticky thing called a brain. It’s a fantastic, statistical learning machine.” Daniel, exams ended, picks up a plastic tiger and, chewing thoughtfully upon its head, smiles as if to agree.

TEST 6 – What is Meaning?

Complete each sentence with the correct ending A-H.


The end product of education, yours and mine and everybody's, is the total pattern of reactions and possible reactions we have inside ourselves. If you did not have within you at this moment the pattern of reactions that we call “the ability to read,” you would see here only meaningless black marks on paper. Because of the trained patterns of response, you are (or are not) stirred to patriotism by martial music, your feelings of reverence are aroused by symbols of your religion, you listen more respectfully to the health advice of someone who has “MD” after his name than to that of someone who hasn’t. What I call here a “pattern of reactions”, then, is the sum total of the ways we act in response to events, to words, and to symbols.

Our reaction patterns, or our semantic habits, are the internal and most important residue of whatever years of education or miseducation we may have received from our parents’ conduct toward us in childhood as well as their teachings, from the formal education we may have had, from all the lectures we have listened to, from the radio programs and the movies and television shows we have experienced, from all the books and newspapers and comic strips we have read, from the conversations we have had with friends and associates, and from all our experiences. If, as the result of all these influences that make us what we are, our semantic habits are reasonably similar to those of most people around us, we are regarded as “normal,” or perhaps “dull.” If our semantic habits are noticeably different from those of others, we are regarded as “individualistic” or “original,” or, if the differences are disapproved of or viewed with alarm, as “crazy.”

Semantics is sometimes defined in dictionaries as “the science of the meaning of words”—which would not be a bad definition if people didn’t assume that the search for the meanings of words begins and ends with looking them up in a dictionary. If one stops to think for a moment, it is clear that to define a word, as a dictionary does, is simply to explain the word with more words. To be thorough about defining, we should next have to define the words used in the definition, then define the words used in defining the words used in the definition, and so on. Defining words with more words, in short, gets us at once into what mathematicians call an “infinite regress”. Alternatively, it can get us into the kind of run-around we sometimes encounter when we look up “impertinence” and find it defined as “impudence,” so we look up “impudence” and find it defined as “impertinence.” Yet—and here we come to another common reaction pattern—people often act as if words can be explained fully with more words. To a person who asked for a definition of jazz, Louis Armstrong is said to have replied, “Man, when you got to ask what it is, you’ll never get to know,” proving himself to be an intuitive semanticist as well as a great trumpet player.

Semantics, then, does not deal with the “meaning of words” as that expression is commonly understood. P. W. Bridgman, the Nobel Prize winner and physicist, once wrote, “The true meaning of a term is to be found by observing what a man does with it, not by what he says about it.” He made an enormous contribution to science by showing that the meaning of a scientific term lies in the operations, the things done, that establish its validity, rather than in verbal definitions. Here is a simple, everyday kind of example of “operational” definition. If you say, “This table measures six feet in length,” you could prove it by taking a foot rule, performing the operation of laying it end to end while counting, “One...two...three...four...” But if you say—and revolutionists have started uprisings with just this statement—“Man is born free, but everywhere he is in chains!”—what operations could you perform to demonstrate its accuracy or inaccuracy?

But let us carry this suggestion of “operationalism” outside the physical sciences where Bridgman applied it, and observe what “operations” people perform as the result of both the language they use and the language other people use in communicating to them. Here is a personnel manager studying an application blank. He comes to the words “Education: Harvard University,” and drops the application blank in the wastebasket (that’s the “operation”) because, as he would say if you asked him, “I don’t like Harvard men.” This is an instance of “meaning” at work—but it is not a meaning that can be found in dictionaries.

If I seem to be taking a long time to explain what semantics is about, it is because I am trying, in the course of explanation, to introduce the reader to a certain way of looking at human behavior. I say human responses because, so far as we know, human beings are the only creatures that have, over and above that biological equipment which we have in common with other creatures, the additional capacity for manufacturing symbols and systems of symbols. When we react to a flag, we are not reacting simply to a piece of cloth, but to the meaning with which it has been symbolically endowed. When we react to a word, we are not reacting to a set of sounds, but to the meaning with which that set of sounds has been symbolically endowed.

A basic idea in general semantics, therefore, is that the meaning of words (or other symbols) is not in the words, but in our own semantic reactions. If I were to tell a shockingly obscene story in Arabic or Hindustani or Swahili before an audience that understood only English, no one would blush or be angry; the story would be neither shocking nor obscene; indeed, it would not even be a story. Likewise, the value of a dollar bill is not in the bill, but in our social agreement to accept it as a symbol of value. If that agreement were to break down through the collapse of our government, the dollar bill would become only a scrap of paper. We do not understand a dollar bill by staring at it long and hard. We understand it by observing how people act with respect to it. We understand it by understanding the social mechanisms and the loyalties that keep it meaningful. Semantics is therefore a social study, basic to all other social studies.

TEST 7 – Grimm’s Fairy Tales

Complete each sentence with the correct ending A-H.

The Brothers Grimm, Jacob and Wilhelm, named their story collection Children's and Household Tales and published the first of its seven editions in Germany in 1812. The table of contents reads like an A-list of fairy-tale celebrities: Cinderella, Sleeping Beauty, Snow White, Little Red Riding Hood, Rapunzel, Rumpelstiltskin, Hansel and Gretel, the Frog King. Drawn mostly from oral narratives, the 210 stories in the Grimms' collection represent an anthology of fairy tales, animal fables, rustic farces, and religious allegories that remain unrivalled to this day.

Such lasting fame would have shocked the humble Grimms. During their lifetimes the collection sold modestly in Germany, at first only a few hundred copies a year. The early editions were not even aimed at children. The brothers initially refused to consider illustrations, and scholarly footnotes took up almost as much space as the tales themselves. Jacob and Wilhelm viewed themselves as patriotic folklorists, not as entertainers of children. They began their work at a time when Germany had been overrun by the French under Napoleon, who were intent on suppressing local culture.

As young, workaholic scholars, single and sharing a cramped flat, the Brothers Grimm undertook the fairy-tale collection with the goal of saving the endangered oral tradition of Germany.

For much of the 19th century teachers, parents, and religious figures, particularly in the United States, deplored the Grimms' collection for its raw, uncivilized content. Offended adults objected to the gruesome punishments inflicted on the stories' villains. In the original “Snow White" the evil stepmother is forced to dance in red-hot iron shoes until she falls down dead. Even today some protective parents shy from the Grimms' tales because of their reputation for violence.

Despite its sometimes rocky reception, Children's and Household Tales gradually took root with the public. The brothers had not foreseen that the appearance of their work would coincide with a great flowering of children's literature in Europe. English publishers led the way, issuing high-quality picture books such as Jack and the Beanstalk and handsome folktale collections, all to satisfy a newly literate audience seeking virtuous material for the nursery. Once the Brothers Grimm sighted this new public, they set about refining and softening their tales, which had originated centuries earlier as earthy peasant fare. In the Grimms' hands, cruel mothers became nasty stepmothers, unmarried lovers were made chaste, and the incestuous father was recast as the devil.

In the 20th century the Grimms' fairy tales have come to rule the bookshelves of children's bedrooms. The stories read like dreams come true: handsome lads and beautiful damsels, armed with magic, triumph over giants and witches and wild beasts. They outwit mean, selfish adults. Inevitably the boy and girl fall in love and live happily ever after. And parents keep reading because they approve of the finger-wagging lessons inserted into the stories: keep your promises, don't talk to strangers, work hard, obey your parents. According to the Grimms, the collection served as “a manual of manners".

Altogether some 40 persons delivered tales to the Grimms. Many of the storytellers came to the Grimms' house in Kassel. The brothers particularly welcomed the visits of Dorothea Viehmann, a widow who walked to town to sell produce from her garden.

An innkeeper's daughter, Viehmann had grown up listening to stories from travelers on the road to Frankfurt. Among her treasures was "Aschenputtel"—Cinderella. Marie Hassenpflug was a 20-year-old friend of their sister, Charlotte, from a well-bred, French-speaking family. Marie's wonderful stories blended motifs from the oral tradition and from Perrault's influential 1697 book, Tales of My Mother Goose, which contained elaborate versions of "Little Red Riding Hood", "Snow White", and "Sleeping Beauty", among others. Many of these had been adapted from earlier Italian fairy tales.

Given that the origins of many of the Grimm fairy tales reach throughout Europe and into the Middle East and Orient, the question must be asked: How German are the Grimm tales? Very, says scholar Heinz Rolleke. Love of the underdog, rustic simplicity, creative energy—these are Teutonic traits. The coarse texture of life during medieval times in Germany, when many of the tales entered the oral tradition, also coloured the narratives. Throughout Europe children were often neglected and abandoned, like Hansel and Gretel. Accused witches were burned at the stake, like the evil mother-in-law in "The Six Swans". "The cruelty in the stories was not the Grimms' fantasy", Rolleke points out. "It reflected the law-and-order system of the old times". The editorial fingerprints left by the Grimms betray the specific values of 19th-century Christian, bourgeois German society.

But that has not stopped the tales from being embraced by almost every culture and nationality in the world. What accounts for this widespread, enduring popularity? Bernhard Lauer points to the "universal style" of the writing. "You have no concrete descriptions of the land, or the clothes, or the forest, or the castles. It makes the stories timeless and placeless." "The tales allow us to express 'our utopian longings'," says Jack Zipes of the University of Minnesota, whose 1987 translation of the complete fairy tales captures the rustic vigour of the original text. "They show a striving for happiness that none of us knows but that we sense is possible. We can identify with the heroes of the tales and become in our mind the masters and mistresses of our own destinies."

Fairy tales provide a workout for the unconscious, psychoanalysts maintain. Bruno Bettelheim famously promoted the therapeutic value of the Grimms' stories, calling fairy tales the "great comforters". By confronting fears and phobias, symbolized by witches, heartless stepmothers, and hungry wolves, children find they can master their anxieties. Bettelheim's theory continues to be hotly debated. But most young readers aren't interested in exercising their unconscious. The Grimm tales in fact please in an infinite number of ways. Something about them seems to mirror whatever moods or interests we bring to our reading of them. This flexibility of interpretation suits them for almost any time and any culture.

TEST 8 – Personality and appearance

Complete each sentence with the correct ending A-F.


When Charles Darwin applied to be the “energetic young man” that Robert Fitzroy, the Beagle’s captain, sought as his gentleman companion, he was almost let down by a woeful shortcoming that was as plain as the nose on his face. Fitzroy believed in physiognomy—the idea that you can tell a person’s character from their appearance. As Darwin’s daughter Henrietta later recalled, Fitzroy had “made up his mind that no man with such a nose could have energy”. This was hardly the case. Fortunately, the rest of Darwin’s visage compensated for his sluggardly proboscis: “His brow saved him.”

The idea that a person’s character can be glimpsed in their face dates back to the ancient Greeks. It was most famously popularised in the late 18th century by the Swiss poet Johann Lavater, whose ideas became a talking point in intellectual circles. In Darwin’s day, they were more or less taken as given. It was only after the subject became associated with phrenology, which fell into disrepute in the late 19th century, that physiognomy was written off as pseudoscience.

First impressions are highly influential, despite the well-worn admonition not to judge a book by its cover. Within a tenth of a second of seeing an unfamiliar face we have already made a judgement about its owner’s character—caring, trustworthy, aggressive, extrovert, competent and so on. Once that snap judgement has formed, it is surprisingly hard to budge. People also act on these snap judgements. Politicians with competent-looking faces have a greater chance of being elected, and CEOs who look dominant are more likely to run a profitable company. There is also a well-established “attractiveness halo”. People seen as good-looking not only get the most valentines but are also judged to be more outgoing, socially competent, powerful, intelligent and healthy.

In 1966, psychologists at the University of Michigan asked 84 undergraduates who had never met before to rate each other on five personality traits, based entirely on appearance, as they sat for 15 minutes in silence. For three traits—extroversion, conscientiousness and openness—the observers’ rapid judgements matched real personality scores significantly more often than chance. More recently, researchers have reexamined the link between appearance and personality, notably Anthony Little of the University of Stirling and David Perrett of the University of St Andrews, both in the UK. They pointed out that the Michigan studies were not tightly controlled for confounding factors. But when Little and Perrett re-ran the experiment using mugshots rather than live subjects, they also found a link between facial appearance and personality— though only for extroversion and conscientiousness. Little and Perrett claimed that they only found a correlation at the extremes of personality.

Justin Carre and Cheryl McCormick of Brock University in Ontario, Canada studied 90 ice-hockey players. They found that a wider face in which the cheekbone-to-cheekbone distance was unusually large relative to the distance between brow and upper lip was linked in a statistically significant way with the number of penalty minutes a player was given for violent acts including slashing, elbowing, checking from behind and fighting.

The kernel of truth idea isn’t the only explanation on offer for our readiness to make facial judgements. Leslie Zebrowitz, a psychologist at Brandeis University in Waltham, Massachusetts, says that in many cases snap judgements are not accurate. The snap judgement, she says, is often an “overgeneralisation” of a more fundamental response. A classic example of overgeneralisation can be seen in predators’ response to eye spots, the conspicuous circular markings seen on some moths, butterflies and fish. These act as a deterrent to predators because they mimic the eyes of other creatures that the potential predators might see as a threat. Another researcher who leans towards overgeneralisation is Alexander Todorov. With Princeton colleague Nikolaas Oosterhof, he recently put forward a theory which he says explains our snap judgements of faces in terms of how threatening they appear.

Todorov and Oosterhof asked people for their gut reactions to pictures of emotionally neutral faces, sifted through all the responses, and boiled them down to two underlying factors: how trustworthy the face looks, and how dominant. They conclude that personality judgements based on people's faces are an overgeneralisation of our evolved ability to infer emotions from facial expressions, and hence a person's intention to cause us harm and their ability to carry it out. Todorov, however, stresses that overgeneralisation does not rule out the idea that there is sometimes a kernel of truth in these assessments of personality.

So if there is a kernel of truth, where does it come from? Perrett has a hunch that the link arises when our prejudices about faces turn into self-fulfilling prophecies, an idea that was investigated by other researchers back in 1977. Our expectations can lead us to influence people to behave in ways that confirm those expectations: consistently treat someone as untrustworthy and they end up behaving that way. This effect sometimes works the other way round, however, especially for those who look cute. The Nobel prizewinning ethologist Konrad Lorenz once suggested that baby-faced features evoke a nurturing response. Support for this has come from work by Zebrowitz, who has found that baby-faced boys and men stimulate an emotional centre of the brain, the amygdala, in a similar way. But there's a twist. Baby-faced men are, on average, better educated, more assertive and apt to win more military medals than their mature-looking counterparts. They are also more likely to be criminals; think Al Capone. Similarly, Zebrowitz found baby-faced boys to be quarrelsome and hostile, and more likely to be academic high-fliers. She calls this the "self-defeating prophecy effect": a man with a baby face strives to confound expectations and ends up overcompensating.

There is another theory that recalls the old parental warning not to pull faces, because they might freeze that way. According to this theory, our personality moulds the way our faces look. It is supported by a study two decades ago which found that angry old people tend to look cross even when asked to strike a neutral expression. A lifetime of scowling, grumpiness and grimaces seemed to have left its mark.

TEST 9 – Malaria

Complete each sentence with the correct ending A-H.

A. Approximately 300 million people worldwide are affected by malaria and between 1 and 1.5 million people die from it every year. Previously extremely widespread, malaria is now mainly confined to Africa, Asia and Latin America. The problem of controlling malaria in these countries is aggravated by inadequate health structures and poor socio-economic conditions. The situation has become even more complex over the last few years with the increase in resistance to the drugs normally used to combat the parasite that causes the disease.

B. Malaria is caused by protozoan parasites of the genus Plasmodium. Four species of Plasmodium can produce the disease in its various forms: Plasmodium falciparum, Plasmodium vivax, Plasmodium ovale and Plasmodium malariae. Plasmodium falciparum is the most widespread and dangerous of the four: untreated, it can lead to fatal cerebral malaria. Malaria parasites are transmitted from one person to another by the female anopheline mosquito. The males do not transmit the disease as they feed only on plant juices. There are about 380 species of anopheline mosquito, but only 60 or so are able to transmit the parasite. Their sensitivity to insecticides is also highly variable.

C. Plasmodium develops in the gut of the mosquito and is passed on in the saliva of an infected insect each time it takes a new blood meal. The parasites are then carried by the blood into the victim’s liver where they invade the cells and multiply. After nine to sixteen days they return to the blood and penetrate the red cells where they multiply again, progressively breaking down the red cells. This induces bouts of fever and anaemia in the infected individual. In the case of cerebral malaria the infected red cells obstruct the blood vessels in the brain. Other vital organs can also be damaged often leading to the death of the patient.

D. Malaria is diagnosed by the clinical symptoms and microscopic examination of the blood. It can normally be cured by anti-malarial drugs. The symptoms - fever, shivering, pain in the joints and headache - quickly disappear once the parasite is killed. In certain regions, however, the parasites have developed resistance to certain anti-malarial drugs, particularly chloroquine. Patients in these areas require treatment with other more expensive drugs. In endemic regions where transmission rates are high, people are continually infected so that they gradually develop immunity to the disease. Until they have acquired such immunity, children remain highly vulnerable. Pregnant women are also highly susceptible since the natural defence mechanisms are reduced during pregnancy.

E. Malaria has been known since time immemorial, but it was centuries before the true causes were understood. Surprisingly, in view of this, some ancient treatments were remarkably effective. An infusion of qinghao containing artemisinin has been used for at least the last 2,000 years in China, and the antifebrile properties of the bitter bark of Cinchona ledgeriana were known in Peru before the 15th century. Quinine, the active ingredient of this potion, was first isolated by pharmacists in 1820. Although people were unaware of the origin of malaria and the mode of transmission, protective measures against the mosquito have been used for many hundreds of years. The inhabitants of swampy regions in Egypt were recorded as sleeping in tower-like structures out of the reach of mosquitoes, whereas others slept under nets as early as 450 B.C.

F. Malaria has social consequences and is a heavy burden on economic development. It is estimated that a single bout of malaria costs a sum equivalent to over 10 working days in Africa. The cost of treatment is between $US0.08 and $US5.30, according to the type of drugs prescribed, as determined by local drug resistance. In 1987 the total cost of malaria - health care, treatment, lost production, etc. - was estimated to be $US800 million for tropical Africa, and this figure is currently estimated at more than $US1800 million.

G. The significance of malaria as a health problem is increasing in many parts of the world. Epidemics are even occurring around traditionally endemic zones, in areas where transmission had been eliminated. These outbreaks are generally associated with deteriorating social and economic conditions, and the main victims are underprivileged rural populations. Economic and political pressures compel entire populations to leave malaria-free areas and move into endemic zones. People who are non-immune are at high risk of severe disease. Unfortunately, these population movements and intensive urbanisation are not always accompanied by adequate development of sanitation and health care. In many areas conflict, economic crises and administrative disorganisation can result in the disruption of health services. The absence of adequate health services frequently results in recourse to self-administration of drugs, often with incomplete treatment. This is a major factor in the increase in resistance of the parasites to previously effective drugs.

H. The hope of global eradication of malaria was finally abandoned in 1969, when it was recognized that this was unlikely ever to be achieved. Ongoing control programs remain essential in endemic areas. In all situations, control programs should be based on half a dozen objectives: provision of early diagnosis, prompt treatment for all people at risk, selective application of sustainable preventive measures, vector control adapted to local situations, the development of reliable information on infection risk, and assessment of the living conditions of the populations concerned. Malaria is a complex disease, but it is a curable and preventable one.

TEST 10 – Placebo effect – The Power of Nothing

Complete each sentence with the correct ending A-H.

Want to devise a new form of alternative medicine? No problem. Here's the recipe. Be warm, sympathetic, reassuring and enthusiastic. Your treatment should involve physical contact, and each session with your patients should last at least half an hour. Encourage your patients to take an active part in their treatment and understand how their disorders relate to the rest of their lives. Tell them that their own bodies possess the true power to heal. Make them pay you out of their own pockets. Describe your treatment in familiar words, but embroidered with a hint of mysticism: energy fields, energy flows, energy blocks, meridians, forces, auras, rhythms and the like. Refer to the knowledge of an earlier age: wisdom carelessly swept aside by the rise and rise of blind, mechanistic science. Oh, come off it, you're saying. Something invented off the top of your head couldn't possibly work, could it?

Well yes, it could - and often well enough to earn you a living. A good living if you are sufficiently convincing or, better still, really believe in your therapy. Many illnesses get better on their own, so if you are lucky and administer your treatment at just the right time you'll get the credit. But that's only part of it. Some of the improvement really would be down to you. Not necessarily because you'd recommended ginseng rather than camomile tea or used this crystal as opposed to that pressure point. Nothing so specific. Your healing power would be the outcome of a paradoxical force that conventional medicine recognises but remains oddly ambivalent about: the placebo effect.

Placebos are treatments that have no direct effect on the body, yet still work because the patient has faith in their power to heal. Most often the term refers to a dummy pill, but it applies just as much to any device or procedure, from a sticking plaster to a crystal to an operation. The existence of the placebo effect implies that even quackery may confer real benefits, which is why any mention of placebo is a touchy subject for many practitioners of complementary and alternative medicine (CAM), who are likely to regard it as tantamount to a charge of charlatanism. In fact, the placebo effect is a powerful part of all medical care, orthodox or otherwise, though its role is often neglected and misunderstood.

One of the great strengths of CAM may be its practitioners' skill in deploying the placebo effect to accomplish real healing. "Complementary practitioners are miles better at producing non-specific effects and good therapeutic relationships," says Edzard Ernst, professor of CAM at Exeter University. The question is whether CAM could be integrated into conventional medicine, as some would like, without losing much of this power.

At one level, it should come as no surprise that our state of mind can influence our physiology: anger opens the superficial blood vessels of the face; sadness pumps the tear glands. But exactly how placebos work their medical magic is still largely unknown. Most of the scant research to date has focused on the control of pain, because it's one of the commonest complaints and lends itself to experimental study. Here, attention has turned to the endorphins, natural counterparts of morphine that are known to help control pain. "Any of the neurochemicals involved in transmitting pain impulses or modulating them might also be involved in generating the placebo response," says Don Price, an oral surgeon at the University of Florida who studies the placebo effect in dental pain. "But endorphins are still out in front."

That case has been strengthened by the recent work of Fabrizio Benedetti of the University of Turin, who showed that the placebo effect can be abolished by a drug, naloxone, which blocks the effects of endorphins. Benedetti induced pain in human volunteers by inflating a blood pressure cuff on the forearm. He did this several times a day for several days, using morphine each time to control the pain. On the final day, without saying anything, he replaced the morphine with a saline solution. This still relieved the subjects' pain: a placebo effect. But when he added naloxone to the saline the pain relief disappeared. Here was direct proof that placebo analgesia is mediated, at least in part, by these natural opiates.

Still, no one knows how belief triggers endorphin release, or why most people can't achieve placebo pain relief simply by willing it. Though scientists don't know exactly how placebos work, they have accumulated a fair bit of knowledge about how to trigger the effect. A London rheumatologist found, for example, that red dummy capsules made more effective painkillers than blue, green or yellow ones. Research on American students revealed that blue pills make better sedatives than pink, a colour more suitable for stimulants. Even branding can make a difference: if Aspro or Tylenol are what you like to take for a headache, their chemically identical generic equivalents may be less effective. It matters, too, how the treatment is delivered. Decades ago, when the major tranquilliser chlorpromazine was being introduced, a doctor in Kansas categorised his colleagues according to whether they were keen on it, openly sceptical of its benefits, or took a "let's try and see" attitude. His conclusion: the more enthusiastic the doctor, the better the drug performed. And this year Ernst surveyed published studies that compared doctors' bedside manners. The studies turned up one consistent finding: "Physicians who adopt a warm, friendly and reassuring manner," he reported, "are more effective than those whose consultations are formal and do not offer reassurance."

Warm, friendly and reassuring are precisely CAM's strong suits, of course. Many of the ingredients of that opening recipe - the physical contact, the generous swathes of time, the strong hints of supernormal healing power - are just the kind of thing likely to impress patients. It's hardly surprising, then, that complementary practitioners are generally best at mobilising the placebo effect, says Arthur Kleinman, professor of social anthropology at Harvard University.


ANSWER KEYS – MATCHING SENTENCE ENDINGS

Mini warm-up practice test – Optimism and Health
Q1. C
Q2. A
Q3. E
Q4. G
Q5. D

TEST 1 – Honey bees in trouble
Q1. B
Q2. F
Q3. E
Q4. A
Q5. D

TEST 2 – Internal Market: Selling the inside
Q1. C
Q2. C
Q3. D
Q4. A
Q5. E
Q6. B

TEST 3 – Musical Maladies
Q1. F
Q2. B
Q3. A
Q4. D

TEST 4 – Theory or Practice? What is the point of research carried out by biz schools?
Q1. C
Q2. D
Q3. A
Q4. B

TEST 5 – What Do Babies Know?
Q1. B
Q2. E
Q3. A
Q4. D
Q5. C

TEST 6 – What is Meaning?
Q1. B
Q2. E
Q3. G
Q4. A
Q5. D

TEST 7 – Grimm’s Fairy Tales
Q1. D
Q2. A
Q3. H
Q4. E
Q5. B

TEST 8 – Personality and appearance
Q1. D
Q2. C
Q3. F
Q4. E

TEST 9 – Malaria
Q1. B
Q2. H
Q3. G
Q4. F

TEST 10 – Placebo effect – The Power of Nothing
Q1. D
Q2. A
Q3. G
Q4. B
Q5. H
Q6. F
