Mental and Sensory Trickery
As machine learning produces virtual reality that feels more real than ever, the divide separating “human” and “machine” is shrinking. We are teaching AI to beat us at our own games, and it’s proving to be a limitlessly powerful student. The world in which the phrases “seeing is believing” and “show me” mean something is rapidly receding in the rearview mirror, giving way to a reality where you can’t always trust your sensory data, which — like any other data — can be hacked or faked. In this case, though, it could be at your own direction, and to your advantage.
The human brain evolved to keep us safe in a world of predatory animals, deep, unnavigable waters, high cliffs, and sharp edges. Optimization for survival and reproduction of our genes demanded accurate sensory input, and generations of reliance on that data led to hard-wired fears that feel as real as anything we experience. Something moving fast at the periphery of your field of vision startles; the sight of a sheer drop opening into yawning space causes the heart to pound. Danger! Avoid, survive.
Now, we have learned that we can work, learn, and play at our highest levels by tricking our minds into perceiving what isn’t there. Like a wizened mentor, turning the lessons inward to the self, we shape the lesson and provide the interpretation for our own brains, translated into unmistakable sensory data. In other words, we provide ourselves with learning opportunities that are based in sensory experiences.
Smart sensory devices are changing the depth to which we experience virtual reality. Oculus earbuds, for example, along with acoustic filtration apps like H_ _ r, provide a sense of true immersion in a virtual environment — something that has only recently become possible. It’s also possible now to smell things without a nose, thanks to advancements in “artificial olfaction” technology produced by companies like eNose. We can taste things that aren’t there and even “send” them to each other online for tasting. If these artificial sensory experiences were to work in tandem, we might not be able to tell the difference between virtuality and the real thing except by telling ourselves what is real and what isn’t.
In fact, virtual experiences may soon provide more sensory data than we can get by any conventional means; an even more “realistic” experience than reality. This would be even more powerful, as TechCrunch points out, with the help of chemical stimulation strengthening the synapses that cement our memories.
The irony of relying on our brains to remind ourselves of what’s real—precisely because we know we won’t be able to trust whatever data our brains themselves come up with—is itself amusing. One of our most fascinating cyborg moments of the coming years may be the merging of technologies with the human body in the pursuit of more realistic virtual experiences.
Our Virtual Future: The Virtual Reality Headsets of Today and Tomorrow
Opening Up New Worlds
The implications of the abilities we’ve developed to trick our brains are more than amusing. They are changing the way we learn, work, and relate to each other as well, not to mention motivating us to learn how to recapture neuroplasticity and use it to our advantage. The quest for the perfect virtual experience is opening up new worlds for anyone who’d like to experience them.
Right now it’s possible to drive through sandcastles or map enemy territory. We are using VR to learn and teach, taking trips through museums, and learning how human organs work. VR is changing the way we work, too: easing pain for patients and allowing developers to prototype apps. Even porn is going virtual — because if we’re going to take work to the next level, play is definitely coming with it.
All of these things would be little more than novelties if our technologies were not so adept at tricking our senses. Thanks to better tech and improved sensory swindling, though, each of these virtual applications holds deeper meaning for us as a species.
|
On May 15, the House Agriculture Committee passed its 2013 farm bill. The bill would cut the Supplemental Nutrition Assistance Program (SNAP, formerly known as the Food Stamp Program) by almost $21 billion over the next decade, eliminating food assistance to nearly 2 million low-income people, mostly working families with children and senior citizens.
The bill’s SNAP cuts would come on top of an across-the-board reduction in benefits that every SNAP recipient will experience starting November 1, 2013.
The Supplemental Nutrition Assistance Program’s (SNAP) primary purpose is to increase the food purchasing power of eligible low-income households in order to improve their nutrition and alleviate hunger and malnutrition. The program’s success in meeting this core goal has been well documented. Less well understood is the fact that the program has become quite effective in supporting work and that its performance in this area has improved substantially in recent years.
The Supplemental Nutrition Assistance Program (SNAP, formerly known as the Food Stamp Program) is the nation’s most important anti-hunger program. In 2012, it helped almost 47 million low-income Americans to afford a nutritionally adequate diet in a typical month.
Nearly 72 percent of SNAP participants are in families with children; more than one-quarter of participants are in households with seniors or people with disabilities.
This chartbook highlights some of the key characteristics of the almost 47 million people using the program as well as trends and data on program administration and use.
SNAP, the nation’s most important anti-hunger program, helps roughly 35 million low-income Americans to afford a nutritionally adequate diet. WIC — short for the Special Supplemental Nutrition Program for Women, Infants, and Children — provides nutritious foods, information on healthy eating, and health care referrals to about 8 million low-income pregnant and postpartum women, infants, and children under five. The School Lunch and School Breakfast programs provide free and reduced-price meals that meet federal nutritional standards to over 22 million school children from low-income families.
The Center designs and promotes policies to make the Food Stamp Program more adequate to help recipients afford an adequate diet, more accessible to eligible families and individuals, and easier for states to administer. We also help states design their own food stamp programs for persons ineligible for the federal program. Our work on the WIC program includes ensuring that sufficient federal funds are provided to serve all eligible applicants and helping states contain WIC costs. Our work on child nutrition programs focuses on helping states and school districts implement recent changes in how they determine a child's eligibility for free or reduced-price school meals.
|
Religion Subject Guide
Subject guides are designed to help students begin the research process, find reputable sources, and save time.
Searching the Library Catalog for Books & Other Materials
To locate books and other materials in CCSF Libraries, select the Library Catalog from the Library's Homepage. You will notice many ways to search, such as Title, Author, Subject, and Subject Keyword.
Examples of Subject searches include:
Religion, Religion and Politics, Religion and Science, Buddhism, Islam, Islam – United States, Prayer, Aguaruna Indians -- Religion
Reserve Materials
Reserve materials include books, sample tests, class notes, and other items that instructors put at the library for class use. The check-out time is shorter than for regular circulating books.
To search for a book on reserve in the Library Catalog, select either Reserves by Course or Reserves by Instructor.
When you have located the materials, write down the Call Number and Title and present this to a staff person at the Circulation Desk.
Browsing the Library Collection
Books in the Library are shelved by call number according to the Library of Congress classification system. Books are arranged on the shelves by subject.
Call number areas in the collection to find materials on religion include:
|SUBJECT||CALL NUMBER RANGE|
|Religions. Mythology. Rationalism||BL|
|Islam, Bahai Faith. Theosophy, etc.||BP|
Reference books provide background information and overviews on a given topic. Relevant reference books for religion include:
Man, myth, and magic: the illustrated encyclopedia of mythology, religion, and the unknown. Richard Cavendish, editor in chief. Washington, DC: American Psychological Association, 2000. BF 31 E52 2000 Vols. 1-8 Rosenberg Reference
The Oxford dictionary of world religions. John Bowker, ed. New York: Oxford University Press, 1997. BL 31 084 1997 Rosenberg Reference
Atlas of the world’s religions. Ninian Smart, ed. New York: Oxford University Press, 1999. G 1046 .E4 A8 1999 Rosenberg Reference.
Taking sides. Clashing views on controversial issues in religion. Daniel K. Judd, ed. Guilford, CT: McGraw-Hill, Dushkin, c2003. H61 .T3577 2003 Rosenberg Reference.
The encyclopedia of American religious history. Edward L. Queen. New York, NY: Facts On File, c1996. BL 2525 .Q44, 1996 Rosenberg Reference
Encyclopedia of American religion and politics . Paul A. Djupe and Laura R. Olson. New York: Facts On File, c2003. BL 2525 .D58 2003 Rosenberg Reference.
Electronic Reference Sources from the CCSF Ebooks collection
For more information about eBooks go to: http://www.ccsf.edu/library/ebooks.html
Encyclopedia of religious rites, rituals, and festivals 2004
Encyclopedia of new religious movements 2006
Encyclopedia of women and religion in North America 2006
A popular Dictionary of Buddhism 1997
Who's who in the Old Testament together with the Apocrypha 2002
The Quran : an encyclopedia 2006
Searching for Articles in Periodical Databases
Periodical databases group together journal, magazine, and newspaper articles by subject. They also usually provide abstracts (brief summaries) and the full text of the articles. Do you need help identifying the differences between scholarly journal and popular magazine articles?
These databases are part of the private, passworded Web, so you will need a current CCSF ID card with a barcode to access those that CCSF subscribes to. All current CCSF Student ID cards should already have a barcode. More information about obtaining a library barcode is available.
Infotrac is a brand name for several databases with coverage from 1980 to the present. Most relevant for religion topics is InfoTrac’s Religion & Philosophy database which covers topics in the areas of both religion and philosophy. InfoTrac’s Expanded Academic ASAP and OneFile databases also have useful materials. These databases let you limit your results to articles only from scholarly journals by checking the box "Refereed titles."
Literature Resource Center
Literature Resource Center has traditional reference works, critical information on authors and their works, and current journal articles. It is valuable for biographical information on authors of works relating to religion, and for critical information on both an individual work and an author’s body of work.
CQ provides lengthy research reports written by the editorial staff of the Congressional Quarterly Co. There are many reports related to religion. Some examples are Religion in America, Evolution versus Creationism, Prayer and Healing, and Religious Persecution. There are also reports on related moral or ethical issues such as issues of reproductive ethics, teaching values, assisted suicide, designer humans, and the ethics of war.
Ethnic Newswatch is comprised of newspapers, magazines and journals of the ethnic, minority and native press in America. Search here for ethnic aspects of topics related to religion.
Below are some examples of academic/scholarly web sites on Religion. If you use a search engine, such as Google, remember to evaluate the quality of the results.
Web Resources for General Reference
Virtual Religion Index
An extensive and well organized index of Web resources with useful annotations that speed the targeting and process of research. From Rutgers University.
Voice of the Shuttle – Religious Studies Page
Links to resources including general studies, specific religions (Christianity, Judaism, Islam, Buddhism, Sikhism, etc.), nonreligious views (Atheism, Agnosticism), issues of law and religion, society and religion, and religious studies courses and departments. From the University of California, Santa Barbara.
Encyclopedia of Religion and Society
“Full text online of the Encyclopedia, with table of contents, covering the spectrum of religions.” The Encyclopedia of Religion and Society is from the Hartford Institute for Religious Research, and its editorial board is comprised of a number of respected sociologists of religion.
The Internet Sacred Text Archive
“A freely available archive of electronic texts about religion, mythology, legends and folklore, and occult and esoteric topics.” Particular focus is on believers’ (defined very broadly) sacred texts, including both primary and secondary materials.
Content Evaluation Guidelines
Advice from the Medical Library Association
Evaluating Web Pages: Techniques to Apply and Questions to Ask
From the UC Berkeley Teaching Library Internet Workshops.
Analyzing Information Sources
Developed by Olin-Kroch-Uris Libraries at Cornell University.
Evaluating and Citing Sources
A quick and easy checklist to use when determining the quality of web documents. Prepared by Librarians at CCSF.
For help you may contact the Reference Desk by phone at (415) 452-5543 or stop by the East and West reference desks at the Rosenberg Library.
Evaluating and Citing Information Sources
Electronic Reference Service is available to CCSF students, faculty, staff and registered community users. Use this service when you are NOT in a CCSF library.
Library and Web Research Workshops
Fifty-minute workshops are given throughout the semester on effective methods in searching for books, articles and information on the Web.
Several useful sources for evaluating the quality of web pages, how to prepare citations for a "Bibliography" or "Works Cited" list, and how to avoid plagiarism.
Online Writing Lab
One of the most thorough and easy-to-navigate writing labs available!
Research and Writing
Hosted by the Internet Public Library.
Process @ CSU
Colorado State University developed these guides which "focus on a range of composing processes as well as issues related to the situations in which writers find themselves."
Copyright Library & Learning Resource Center, City College of San Francisco.
Last updated September 13, 2007
|
CDC's Office on Disability and Health focuses on the prevention of secondary conditions and health promotion among persons with disabilities. Emphasis is on scientific support for surveillance of disabilities, the cost-effectiveness of prevention strategies focused on secondary conditions and health promotion activities, and identifying risk and protective factors for secondary conditions. This is implemented by providing funds to states for public health activities addressing the needs of persons with disabilities. The program emphasizes secondary conditions that cross diagnostic categories and focuses on broader disability areas. This is a relatively new approach to prevention programs for CDC, which historically focused on the primary prevention of disabling conditions. The program is focusing on activities that will enhance its ability to measure performance in this new area. This performance measure reflects a first step toward building a data collection system that will enable CDC to monitor trends related to health and quality of life among people with disabilities.
Performance Goals and Measures
Performance Goal: By 2002, a national network will exist that will provide all states with better access to data on disabilities for their use in analyzing the needs of people with disabling conditions.
|FY Baseline||FY 1999 Appropriated||FY 2000 Estimate|
|0 (1997)||By 1999, the number of states that have begun using the Behavioral Risk Factor Surveillance System (BRFSS) disability module will be increased to 15.||By 2000, the number of states that have begun using the BRFSS disability module will be increased to 25.|
Currently, there is not a data collection system in place that could be used to measure outcomes that focus on actual improvements in the quality of life of people with disabling conditions. As a result, the performance measure that has been selected for this program involves the nationwide implementation of a data collection system by the year 2002. We believe that, although challenging, nationwide implementation of the BRFSS' disability module by 2002 is feasible. However, this represents a change in direction for CDC's disabilities program, which previously focused on preventing primary disabilities. As part of on-going strategic planning efforts, the program has refocused its efforts on promoting health and improving quality of life among people with disabilities. 1997 is the first year that CDC has funded states to address these issues. As a result, the program is focusing on activities that will enhance the ability to measure performance in this new area. Tracking of the implementation of this data collection system will be accomplished through a requirement that all CDC state grantees report on whether they are utilizing the module. The cost of this data collection effort will be minimal.
Verification/Validation of Performance Measures: This performance measure will be verified by reviews of the reports required by cooperative agreement recipients.
Links to DHHS Strategic Plan
This objective is closely linked to DHHS Goal 5: Improve public health systems.
|
NOAA scientists agree the risks are high, but say Hansen overstates what science can really say for sure
Jim Hansen at the University of Colorado’s World Affairs Conference (Photo: Tom Yulsman)
Speaking to a packed auditorium at the University of Colorado’s World Affairs Conference on Thursday, NASA climatologist James Hansen found a friendly audience for his argument that we face a planetary emergency thanks to global warming.
Despite the fact that the temperature rise has so far been relatively modest, “we do have a crisis,” he said.
With his characteristic under-stated manner, Hansen made a compelling case. But after speaking with two NOAA scientists today, I think Hansen put himself in a familiar position: out on a scientific limb. And after sifting through my many pages of notes from two days of immersion in climate issues, I’m as convinced as ever that journalists must be exceedingly careful not to overstate what we know for sure and what is still up for scientific debate.
Crawling out on the limb, Hansen argued that global warming has already caused the levels of water in Lake Powell and Lake Mead — the two giant reservoirs on the Colorado River that ensure water supplies for tens of millions of Westerners — to fall to 50 percent of capacity. The reservoirs “probably will not be full again unless we decrease CO2 in the atmosphere,” he asserted.
Hansen is arguing that simply reducing our emissions and stabilizing CO2 at about 450 parts per million, as many scientists argue is necessary, is not nearly good enough. We must reduce the concentration from today’s 387 ppm to below 350 ppm.
“We have already passed into the dangerous zone,” Hansen said. If we don’t reduce CO2 in the atmosphere, “we would be sending the planet toward an ice free state. We would have a chaotic journey to get there, but we would be creating a very different planet, and chaos for our children.” Hansen’s argument (see a paper on the subject here) is based on paleoclimate data which show that the last time atmospheric CO2 concentrations were this high, the Earth was ice free, and sea level was far higher than it is today.
“I agree with the sense of urgency,” said Peter Tans, a carbon cycle expert at the National Oceanic and Atmospheric Administration here in Boulder, in a meeting with our Ted Scripps Fellows in Environmental Journalism. “But I don’t agree with a lot of the specifics. I don’t agree with Jim Hansen’s naming of 350 ppm as a tipping point. Actually we may have already gone too far, except we just don’t know.”
A key factor, Tans said, is timing. “If it takes a million years for the ice caps to disappear, no problem. The issue is how fast? Nobody can give that answer.”
Martin Hoerling, a NOAA meteorologist who is working on ways to better determine the links between climate change and regional impacts, such as drought in the West, pointed out that the paleoclimate data Hansen bases his assertions on are coarse. They do not record year-to-year events, just big changes that took place over very long time periods. So the data give no indication of just how long it takes to de-glaciate Antarctica and Greenland.
Hoerling also took issue with Hansen’s assertions about lakes Powell and Mead. While it is true that “the West has had the most radical change in temperature in the U.S.,” there is no evidence yet that this is a cause of increasing drought, he said.
Flows in the Colorado River have been averaging about 12 million acre feet each year, yet we are consuming 14 million acre feet. “Where are we getting the extra from? Well, we’re tapping into our 401K plan,” he said. That would be the two giant reservoirs, and that’s why their water levels have been declining.
“Why is there less flow in the river?” Hoerling said. “Low precipitation — not every year, but in many recent years, the snow pack has been lower.” And here’s his almost counter-intuitive point: science shows that the reduced precipitation “is due to natural climate variability . . . We see little indication that the warming trend is affecting the precipitation.”
In my conversation with Tans and Hoerling today, I saw a tension between what they believe and what they think they can demonstrate scientifically.
“I like to frame the issue differently,” Tans said. “Sure, we cannot predict what the climate is going to look like in a couple of decades. There are feedbacks in the system we don’t understand. In fact, we don’t even know all the feedbacks . . . To pick all this apart is extremely difficult — until things really happen. So I’m pessimistic.”
There is, Tans said, “a finite risk of catastrophic climate change. Maybe it is 1 in 6, or maybe 1 in 20 or 1 in 3. Yet if we had a risk like that of being hit by an asteroid, we’d know what to do. But the problem here is that we are the asteroid.”
Tans argues that whether or not we can pin down the degree of risk we are now facing, one thing is obvious: “We have a society based on ever increasing consumption and economic expectations. Three percent growth forever is considered ideal. But of course it’s a disaster.”
Hoerling says we are living like the Easter Islanders, who were faced with collapse from over consumption of resources but didn’t see it coming. Like them, he says, we are living in denial.
“I think we are in that type of risk,” Tans said. “But is that moving people? It moves me. But I was already convinced in 1972.”
|
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry.
Easily Characterize Promoter Activity
The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color—in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured the protein is bright red. The protein’s color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation).
Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color.
pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer.
Detecting Timer Fluorescent Protein
You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody.
You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions.
Terskikh, A., et al. (2000) Science 290(5496):1585–1588.
|
Brain Matures a Few Years Late in ADHD
but Follows Normal Pattern
A 2007 press release from the National Institute of Mental Health discusses brain development in ADHD youths. In some cases, brain development is delayed as much as three years. The full release and related video are available on the NIMH site: Brain Matures a Few Years Late in ADHD, but Follows Normal Pattern.
Autistic Spectrum Disorders (ASD):
How to Help Children with Autism Learn
From Dr. Lauer and Dr. Beaulieu's talk
Quick facts about Pervasive Developmental Disorders (PDD)/ Autistic Spectrum Disorders (ASD)
- Autism is a 'spectrum disorder' meaning that it affects children in different ways and at different times in their development.
- Typically, delays and learning problems can emerge in several areas of functioning including social functioning, communication skills, motor skills, and overall intellectual potential.
- Each child has their own learning style that includes specific learning challenges as well as areas of preserved skills and, at times, exceptional abilities.
- Both autism and Asperger's disorder are on the same continuum but are distinct in their expression.
What are the challenges students with PDD/ASD frequently experience?
- Academic difficulties that can often be misinterpreted as learning disabilities.
- Problems with executive functioning skills.
- Difficulty in forming relationships with peers.
- Emotional difficulties due to learning and social problems such as anxiety, depression, low self-esteem.
- Fear of new situations and trouble adjusting to changes.
- May look like or be misconstrued as attention-deficit-hyperactivity disorder (ADHD), Nonverbal Learning Disability (NLD), Oppositional-Defiant Disorder or Obsessive Compulsive Disorder (OCD).
Why choose US to help YOU?
- Our evaluations are conducted by neuropsychologists who have been extensively trained in the early detection of autistic spectrum disorders and in the identification of specific patterns of learning strengths and weaknesses that are often associated with this condition.
- Our evaluations help determine which teaching style is best suited to an individual's specific learning profile; we also offer suggestions regarding compensatory educational approaches.
- We work as a team with other learning professionals, advocates and health professionals to enhance the child's potential for success in all settings.
'The design of truly individual treatment plans that exploit strengths and compensate for weaknesses begins with a detailed understanding of how learning is different for children with autism than for those without autism and how learning is different among children with autism.'
— Bryna Siegel, Ph.D., author of Helping Children with Autism Learn
For more information on current research, interventions and programs, follow us on Facebook.
Coming to see you for an evaluation was so helpful, and I'm so happy that I did this. After struggling for years with ADHD but not knowing that's what it was, and almost completely ruining our marriage because of it, your diagnosis helped more than you could know. Now I know that it's not just me. The diagnosis has turned our lives around and helped me feel more accomplished at work. Thanks again for everything.
Sandy and Bob M.
|
// VertexBuffer and VertexBufferPtr are declared elsewhere in the codebase.
class VertexBufferCompound : public VertexBuffer
{
public:
    // Register the sub-buffers that make up this compound buffer.
    void SetupSubBuffers(const std::vector<VertexBufferPtr>& subVertexBuffers);

    // Derive the compound vertex format from the registered sub-buffers.
    void SetupFormat();
};
|
1854-89 THREE DOLLARS INDIAN HEAD
In 1853 the United States negotiated the "Gadsden Purchase," a settlement of a boundary dispute with Mexico that resulted in the U.S. acquiring what would become the southern portions of Arizona and New Mexico for ten million dollars. The following year Commodore Matthew Perry embarked upon his famed expedition to re-open Japan to the Western world and establish trade. Spreading beyond its borders in many ways, the United States had a few years earlier joined the worldwide move to uniform postage rates and printed stamps when the Congressional Act of March 3, 1845 authorized the first U.S. postage stamps and set the local prepaid letter rate at five cents. This set the stage for a close connection between postal and coinage history.
Exactly six years later, the postage rate was reduced to three cents when New York Senator Daniel S. Dickinson fathered legislation that simultaneously initiated coinage of the tiny silver three-cent piece as a public convenience. The large cents then in circulation were cumbersome and unpopular, and the new denomination was designed to facilitate the purchase of stamps without using the hated "coppers."
This reasoning was carried a step further when the Mint Act of February 21, 1853 authorized a three-dollar gold coin. Congress and Mint Director Robert Maskell Patterson were convinced that the new coin would speed purchases of three-cent stamps by the sheet and of the silver three-cent coins in roll quantities. Unfortunately, at no time during the 35-year span of this denomination did public demand justify these hopes. Chief Engraver James Barton Longacre chose an "Indian Princess" for his obverse: not a Native American profile, but a profile modeled after the Greco-Roman Venus Accroupie statue then in a Philadelphia museum. Longacre used this distinctive sharp-nosed profile on his gold dollar of 1849 and would employ it again on the Indian Head cent of 1859. On the three-dollar coin Liberty is wearing a feathered headdress of equal-sized plumes with a band bearing LIBERTY in raised letters. She's surrounded by the inscription UNITED STATES OF AMERICA. Such a headdress dates back to the earliest known drawings of American Indians: French artist Jacques le Moyne du Morgue's sketches of the Florida Timucua tribe, who lived near the tragic French colony of Fort Caroline in 1562. It was accepted by engravers and medalists of the day as the design shorthand for "America."
Longacre's reverse depicted a wreath of tobacco, wheat, corn and cotton with a plant at top bearing two conical seed masses. The original wax models of this wreath still exist on brass discs in a Midwestern collection and show how meticulous Longacre was in preparing his design. Encircled by the wreath is the denomination 3 DOLLARS and the date. There are two boldly different reverse types, the small DOLLARS appearing only in 1854 and the large DOLLARS on coins of 1855-89. Many dates show bold "outlining" of letters and devices, resembling a double strike but probably the result of excessive forcing of the design punches into the die steel, causing a hint of their sloping "shoulders" to appear as part of the coin's design. The high points of the obverse design that first show wear are the cheek and hair above the eye; on the reverse, check the bow knot and leaves.
A total of just over 535,000 pieces were issued along with 2058 proofs. The first coins struck were the 15 proofs of 1854. Regular coinage began on May 1, and that first year saw 138,618 pieces struck at Philadelphia (no mintmark), 1,120 at Dahlonega (D), and 24,000 at New Orleans (O). These two branch mints would strike coins only in 1854. San Francisco produced the three-dollar denomination in 1855, 1856, and 1857, again in 1860, and apparently one final piece in 1870. Mintmarks are found below the wreath.
Every U.S. denomination boasts a number of major rarities. The three-dollar gold coinage of 1854-1889 is studded with so many low-mintage dates that the entire series may fairly be called rare. In mint state 1878 is the most common date, followed by the 1879, 1888, 1854 and 1889 issues. Every other date is very rare in high grade, particularly 1858, 1865, 1873 Closed 3 and all the San Francisco issues. Minuscule mintages were the rule in the later years. Proof coins prior to 1859 are extremely rare and more difficult to find than the proof-only issues of 1873 Open 3, 1875 and 1876, but many dates are even rarer in the higher Mint State grades. This is because at least some proofs were saved by well-heeled collectors, while few lower-budget collectors showed any interest in higher-grade business strikes of later-date gold. Counterfeits are known for many dates; any suspicious piece should be authenticated.
The rarest date of all is the unique 1870-S, of which only one example was struck for inclusion in the new Mint's cornerstone. Either the coin escaped, or a second was struck as a pocket piece for San Francisco Mint Coiner J.B. Harmstead. In any event, one coin showing traces of jewelry use surfaced in the numismatic market in 1907. It was sold to prominent collector William H. Woodin, and when Thomas L. Elder sold the Woodin collection in 1911, the coin went to Baltimore's Waldo C. Newcomer. Later owned by Virgil Brand, it was next sold by Ted and Carl Brandts of Ohio's Celina Coin Co. and Stack's of New York to Louis C. Eliasberg in 1946 for $11,500. In Bowers and Merena's October 1982 sale of the U.S. Gold Collection, this famous coin sold for a record $687,500.
The three-dollar denomination quietly expired in 1889 along with the gold dollar and nickel three-cent piece. America's coinage was certainly more prosaic without this odd denomination gold piece, but its future popularity with collectors would vastly outstrip the lukewarm public reception it enjoyed during its circulating life.
|
/*!
* \file config.cpp
 * \brief Classes for the different option types in rtsn
* \author S. Schotthoefer
*
 * Disclaimer: This class structure was copied and modified with open source permission from SU2 v7.0.3 https://su2code.github.io/
*/
#include "common/optionstructure.h"
// --- Members of OptionBase ----
OptionBase::~OptionBase() {}
std::vector<std::string> OptionBase::GetValue() { return _value; }
std::string OptionBase::SetValue( std::vector<std::string> value ) {
this->_value = value;
return "";
}
std::string OptionBase::OptionCheckMultipleValues( std::vector<std::string>& option_value, std::string type_id, std::string option_name ) {
if( option_value.size() != 1 ) {
std::string newString;
newString.append( option_name );
newString.append( ": multiple values for type " );
newString.append( type_id );
return newString;
}
return "";
}
std::string OptionBase::BadValue( std::vector<std::string>& option_value, std::string type_id, std::string option_name ) {
std::string newString;
newString.append( option_name );
newString.append( ": improper option value for type " );
newString.append( type_id );
newString.append( ". Value chosen: " );
for( unsigned i = 0; i < option_value.size(); i++ ) {
newString.append( option_value[i] );
}
return newString;
}
// ---- Members of OptionDouble
OptionDouble::OptionDouble( std::string option_field_name, double& option_field, double default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionDouble::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
// check if there is more than one value
std::string out = OptionCheckMultipleValues( option_value, "double", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
std::istringstream is( option_value[0] );
double val;
if( is >> val ) {
this->_field = val;
return "";
}
return BadValue( option_value, "double", this->_name );
}
void OptionDouble::SetDefault() { this->_field = this->_def; }
// ---- Members of OptionString
OptionString::OptionString( std::string option_field_name, std::string& option_field, std::string default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionString::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
// check if there is more than one value
std::string out = OptionCheckMultipleValues( option_value, "string", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
this->_field.assign( option_value[0] );
return "";
}
void OptionString::SetDefault() { this->_field = this->_def; }
// --- Members of OptionInt
OptionInt::OptionInt( std::string option_field_name, int& option_field, int default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionInt::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
std::string out = OptionCheckMultipleValues( option_value, "int", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
std::istringstream is( option_value[0] );
int val;
if( is >> val ) {
this->_field = val;
return "";
}
return BadValue( option_value, "int", this->_name );
}
void OptionInt::SetDefault() { this->_field = this->_def; }
// ---- Members of OptionULong
OptionULong::OptionULong( std::string option_field_name, unsigned long& option_field, unsigned long default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionULong::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
std::string out = OptionCheckMultipleValues( option_value, "unsigned long", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
std::istringstream is( option_value[0] );
unsigned long val;
if( is >> val ) {
this->_field = val;
return "";
}
return BadValue( option_value, "unsigned long", this->_name );
}
void OptionULong::SetDefault() { this->_field = this->_def; }
// ---- Members of OptionUShort
OptionUShort::OptionUShort( std::string option_field_name, unsigned short& option_field, unsigned short default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionUShort::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
std::string out = OptionCheckMultipleValues( option_value, "unsigned short", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
std::istringstream is( option_value[0] );
unsigned short val;
if( is >> val ) {
this->_field = val;
return "";
}
return BadValue( option_value, "unsigned short", this->_name );
}
void OptionUShort::SetDefault() { this->_field = this->_def; }
// ---- Members of OptionLong
OptionLong::OptionLong( std::string option_field_name, long& option_field, long default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionLong::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
std::string out = OptionCheckMultipleValues( option_value, "long", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
std::istringstream is( option_value[0] );
long val;
if( is >> val ) {
this->_field = val;
return "";
}
return BadValue( option_value, "long", this->_name );
}
void OptionLong::SetDefault() { this->_field = this->_def; }
// ---- Members of OptionBool
OptionBool::OptionBool( std::string option_field_name, bool& option_field, bool default_value ) : _field( option_field ) {
this->_def = default_value;
this->_name = option_field_name;
}
std::string OptionBool::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
// check if there is more than one value
std::string out = OptionCheckMultipleValues( option_value, "bool", this->_name );
if( out.compare( "" ) != 0 ) {
return out;
}
if( option_value[0].compare( "YES" ) == 0 ) {
this->_field = true;
return "";
}
if( option_value[0].compare( "NO" ) == 0 ) {
this->_field = false;
return "";
}
return BadValue( option_value, "bool", this->_name );
}
void OptionBool::SetDefault() { this->_field = this->_def; }
// --- members of OptionStringList
OptionStringList::OptionStringList( std::string option_field_name, unsigned short& list_size, std::vector<std::string>& option_field )
: _field( option_field ), _size( list_size ) {
this->_name = option_field_name;
}
std::string OptionStringList::SetValue( std::vector<std::string> option_value ) {
OptionBase::SetValue( option_value );
// The size is the length of option_value
unsigned short option_size = option_value.size();
if( option_size == 1 && option_value[0].compare( "NONE" ) == 0 ) {
this->_size = 0;
return "";
}
this->_size = option_size;
// Parse all of the options
this->_field.resize( this->_size );
for( unsigned long i = 0; i < option_size; i++ ) {
this->_field.at( i ) = option_value[i];
}
return "";
}
void OptionStringList::SetDefault() {
this->_size = 0; // There is no default value for list
}
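Every numeric option class above (OptionDouble, OptionInt, OptionULong, OptionUShort, OptionLong) repeats the same std::istringstream extraction idiom inside its SetValue. A minimal standalone sketch of that idiom, independent of the rtsn option headers, might look like this:

```cpp
#include <optional>
#include <sstream>
#include <string>

// Validate a single token with the same idiom the Option* classes use:
// construct a std::istringstream over the raw string and attempt a
// stream extraction into the target type.
template <typename T>
std::optional<T> ParseScalar(const std::string& token)
{
    std::istringstream is(token);
    T value{};
    if (is >> value)
        return value;    // extraction succeeded
    return std::nullopt; // the classes report this case via BadValue()
}
```

Note the caveat the originals inherit: extraction succeeds on a valid leading number, so a token like "3x" parses as 3 unless trailing characters are also checked.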
|
Monday, March 1, 2010
Essay on Hills like White Elephants
Hills like White Elephants Essay
Ernest Hemingway’s short story “Hills like White Elephants” is told mainly through the dialogue of two protagonists at a railway station in rural Spain. The labels on the luggage they carry are an indication of their nomadic life, and their conversation reveals their struggling romantic relationship. The girl, Jig, laments that their mundane lifestyle consists of nothing but “look at things and try new drinks.” That the girl’s relationship to the American is never named suggests that it is not particularly serious or meaningful.
The calm, simple setting, as well as the lack of colorful imagery on their side of the Ebro, reflects their life but contrasts with the escalating tension in their conversation. As they drink beer, Jig comments that the distant white hills against the “brown and dry” country “look like white elephants.” The American’s careless response to her observation and her disappointed reaction establish the story’s pivotal issue.
As tension between the couple continues to build through various attempts at small talk and the ordering of more drinks, the problem in the relationship emerges as an operation the American wants Jig to have. When he encourages Jig to have the “awfully simple” operation and tells her that “They just let the air in and then it’s perfectly natural,” it becomes apparent that the operation is an abortion. The title of the short story insinuates pregnancy, since a “white elephant” is defined as a property requiring much care and expense while yielding little profit: an object no longer of value to its owner, though perhaps of value to others. The child inside Jig will require unconditional care, love, and considerable expense once it is born. Even though the pregnancy is invaluable to Jig, it means nothing but problems to her lover. The imagery of the hills and the curves of the Ebro suggest the shape of a pregnant woman and symbolize that the pregnancy is a large hill in their relationship. The American likes traveling to places with Jig and enjoys their lifestyle, since their relationship is based only on what they have now and not on what they could have. He is unwilling to give up or change anything to make their present life better, because he is afraid of losing his possessions once he assumes the role of a father. He promises Jig his support after the operation, and assures her that life will go on as before, as if nothing has happened.
Unfortunately, he fails to understand that the pregnancy has occurred and everything will be different no matter what Jig decides.
Jig is aware that the real concern is one of going past the point when care and respect can endure difficulties in a relationship. This seems to be the first critical issue they've had to face, to measure the depth of their relationship. Their weak relationship will eventually fall apart because the relationship lacks substance and understanding. If they cannot decide and agree on simple matters such as whether or not to have water in the drink, they cannot possibly agree on the important issues in their relationship such as having a baby.
The climax of the story arrives when Jig, agitated by their irritating conversation and their romantic relationship, begins to question their uncertain future and his true feelings for her. She seems persuaded by the American when she declares her willingness to have the operation in spite of her own wants and needs, because she “doesn’t care” about herself. At the same time, Jig begins to realize that life may not turn out the way she had planned. She remarks that the Anis del Toro “tastes like licorice,” and that “everything tastes of licorice,” “especially all the things you’ve waited so long for.” She likes to try new things, like the drink, but is often disappointed in the end; she indicates that it is too late for him to make things better. The American believes that Jig is being unreasonable for not wanting to have the “simple” operation done so they can “be all right and be happy” again. He informs her that he has “known lots of people that have done it” in order to convince her. He says the pregnancy is “the only thing that bothers us. It’s the only thing that’s made us unhappy.” He sees the whole issue as “simple” because he does not understand the real problem that is causing the misery. When he finally leaves Jig to get their bags for the train, he observes that the other people are “waiting reasonably for their train”; in his mind, Jig is to blame for their troubles because she is unreasonably waiting for a future he cannot imagine having with her. Ironically, he is the unreasonable one, since his insistence on the abortion is what causes the problems. Jig realizes that their withering relationship is not the result of her pregnancy but of their failure to understand each other, and that they are too incompatible as a couple to have a family together. Even if she does have the abortion, she can no longer stay with him, because he can never give her what she longs for.
The story ends with an assurance from Jig that she is “fine,” as if she has made up her mind on the abortion issue.
Hemingway leaves the reader wondering about their final destination. He sets the story in the valley of the Ebro to symbolize the couple’s situation and options in life. They are on the sunless and barren side of the mountain, where they can only see hills that look like white elephants. At the end of the story the American remarks, “I’d better take the bags over to the other side of the station,” the side where there is growth and life. The train represents two different directions in life; however, it is unclear whether this signifies that the man has changed his mind about the abortion, or that Jig has decided to go through with the operation and leave him, so that they must live separate lives. Jig desires to change and to live a different life because she is aware of the alternative; she is ready and willing to experience a different life, while her lover is not. Hemingway strategically calls the man simply “the American” and gives the girl a plain name to show how common their problem is in society. He conveys that elements such as understanding, communication, honesty, and maturity are essential to every healthy relationship, and that the lack of those elements leads to its inevitable disintegration.
|
Cadence:High vs Low
Cadence is the number of revolutions the crank turns per minute; it is the cycling equivalent of a car's tachometer. Simply put, it is the rate at which a cyclist can turn the cranks for a sustained period of time. While many articles have been written on what makes a good or bad riding cadence and how to maintain a high one, I personally believe that there is no special number one must reach. In the Lance Armstrong days, it was emphasized and drilled into beginners that one must keep a cadence above 90; Armstrong was famous for holding a very high cadence (over 100 rpm) throughout his Tours.
Yet when I personally tried keeping a high cadence, I found it very exhausting: my lungs were crying out and I tired quickly. So I switched to a lower cadence (70-85 rpm) and found it easier to manage. The lower cadence was easier on my lungs, though not as easy on my legs. But my legs can somehow cope with that pain better than I can cope with the pain in my lungs!
Different people have different aerobic and anaerobic capacities, so while I have no advice on keeping a specific number, improving does require being able to hold a good tempo. This can be helped by using cleats and by having a good cycling position. While I prefer a slightly slower cadence for cruising on the flats, I like a higher cadence on the hills, as a low cadence is harder for me to maintain while climbing.
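For a rough sense of what a cadence change means at the wheel: road speed is just cadence times the gear's rollout. A small sketch (the 50x17 gear and 2.1 m wheel circumference are illustrative numbers, not from this post):

```cpp
// Road speed in km/h from cadence (rpm), chainring and cog tooth counts,
// and wheel circumference in metres. Each crank revolution moves the bike
// circumference * (chainring / cog) metres; scale by rpm and convert to km/h.
double SpeedKmh(double cadenceRpm, int chainring, int cog, double wheelCircumferenceM)
{
    const double metresPerRev = wheelCircumferenceM * static_cast<double>(chainring) / cog;
    return metresPerRev * cadenceRpm * 60.0 / 1000.0;
}
```

With these illustrative numbers, 90 rpm in a 50x17 works out to roughly 33 km/h, while 75 rpm in the same gear gives about 28 km/h; holding the same speed at the lower cadence means pushing a bigger gear with more leg force, which matches the lungs-versus-legs trade-off described above.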
About sadisticnoob
A poor guy attempting to skim on bike parts
|
A bachelor's degree earned in the United States usually takes a minimum of four years. Degrees are earned by taking a combination of required courses which meet liberal arts distribution requirements (humanities, social sciences, and natural sciences), a required number of courses in the major field of study (called a "major" in the United States), and the balance of courses as electives. Each course taken is assigned a value called "points" (also called "credits" or "units"). To earn a bachelor's degree, one must earn a minimum of 124 points (usually more) and, at the same time, meet the distribution and major requirements. Advisement on course selection is available in each school at the time of registration.
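As a quick sanity check of the pacing these numbers imply, here is a sketch (assuming the common pattern of two semesters per year; summer terms are ignored):

```cpp
// Average per-semester credit load needed to reach a point total in a
// given number of years, assuming two semesters per year.
double AveragePointsPerSemester(int totalPoints, int years)
{
    return static_cast<double>(totalPoints) / (years * 2);
}
```

The 124-point minimum spread over four years averages 15.5 points per semester, comfortably above the 12-point full-time floor mentioned below for F-1 students.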
Undergraduate studies are offered in three divisions of the University. Each division offers a distinct program of studies. The divisions that offer the undergraduate (B.A. or B.S.) degree are:
1. Columbia College (CC) - Study of the liberal arts and sciences leading to the B.A. degree, traditionally but not exclusively for students just out of secondary school, usually between the ages of 17 and 22. With an enrollment of approximately 4,000, Columbia College is the smallest college in the Ivy League.
2. Fu Foundation School of Engineering and Applied Science (SEAS) - Offers the B.S. degree in engineering and applied science fields.
3. School of General Studies (GS) - Offers the B.A. or B.S. degree to students who have had a break of a year or more in their education since high school.
All other divisions of the University are graduate schools and, as such, offer degrees beyond the bachelor's degree, usually master's or doctoral degrees.
Master's degrees take from one to three years to earn depending on the course work and research/writing requirements.
Ph.D. requirements may vary but usually require one to two years of course work beyond the master's degree, comprehensive examinations, a major original contribution to research in the field of study, and oral defense of one's research. The research and writing requirement can take from two to five years beyond the course work. A Ph.D. candidate should plan on a minimum of four and an average of six years of study to complete the degree requirements. It is not unusual for a student to be at Columbia seven or eight years to complete a doctorate.
To be admitted to these schools, the applicant must have completed a degree considered in the United States to be equivalent to a U.S. bachelor's degree with a strong academic record. There are also other admissions requirements. The Columbia schools and programs that offer graduate degrees are:
- College of Dental Medicine
- College of Physicians and Surgeons
- Fu Foundation School of Engineering and Applied Science
- Graduate School of Architecture, Planning, and Preservation
- Graduate School of Arts and Sciences
- Graduate School of Business
- Graduate School of Journalism
- Institute of Human Nutrition
- Mailman School of Public Health
- Program in Occupational Therapy
- Program in Physical Therapy
- School of Continuing Education
- School of International and Public Affairs
- School of Law
- School of Nursing
- School of Social Work
- School of the Arts
The School of Continuing Education offers, in addition to its master's degree programs, opportunities for non-degree study at Columbia to qualified applicants. Both undergraduate and graduate level courses from the University's arts and sciences course offerings may be taken for credit. This option is particularly appropriate for international students who wish to study at Columbia for a semester or two.
To qualify for an I-20 to be in F-1 student status in the U.S., students must register for at least 12 points of credit-bearing coursework each semester. This is usually equivalent to four courses each semester.
Of particular interest to international students is the School of Continuing Education's American Language Program. The ALP is one of the oldest English as a Second Language programs in the United States and offers a carefully integrated sequence of courses to students, business and professional people, and international visitors who wish to improve their command of English.
There are two institutions which are affiliated with, located adjacent to, and bear the name of Columbia University. They are administered separately, and applications for admission must be made directly to these institutions.
Barnard College, Columbia University An undergraduate liberal arts college for women affiliated with Columbia University. For more information, write to Barnard College Admissions, 111 Milbank,3009 Broadway, New York, NY 10027, U.S.A.
Teachers College, Columbia University A graduate school for students interested in education, offering only the master's and doctoral degrees. For more information, write to Teachers College Admissions, 146 Horace Mann, 551 West 120 Street, New York, NY 10027, U.S.A.
|
In the American electoral system, a primary election is an election that determines the nominee for each political party, who then competes for the office in the general election. A presidential primary is a state election that picks the delegates committed to nominating a particular candidate for president of the United States. A presidential caucus, as in Iowa, requires voters to meet together for several hours in face-to-face meetings that select county delegates, who eventually pick the delegates to the national convention. No other country uses primaries; parties elsewhere choose their candidates in conventions.
Primaries were introduced in the Progressive Era in the early 20th century to weaken the power of bosses and make the system more democratic. In presidential elections, they became important starting in 1952, when the first-in-the-nation New Hampshire Primary helped give Dwight D. Eisenhower the Republican nomination, and knocked Harry S. Truman out of the Democratic race because of his poor showing. In 1964, Lyndon B. Johnson ended his reelection campaign after doing poorly in New Hampshire.
After 1968, both parties changed their rules to emphasize presidential primaries, although some states still use the caucus system.
In recent decades, New Hampshire has held the first primary a few days after Iowa holds the first caucus. That gives these two states enormous leverage, as the candidates and the media focus there; New Hampshire and Iowa receive about half of all the media attention given to the primaries.
The primary allows voters to choose among different candidates of the same political party, perhaps representing different wings of the party. For example, a Republican primary may offer a range of candidates from moderate to conservative. Gallup's 2008 polling data indicated a trend in primary elections toward more conservative candidates, despite the more liberal result in the general election.
In recent years the primary season has come earlier and earlier, as states move up to earlier dates in the hope it will give them more leverage. For example, Barry Goldwater won the 1964 nomination because he won the last primary, in California. The logic is faulty: in highly contested races the later primaries have more leverage. Thus in 2008 California gave up its traditional last-in-the-nation role and joined 20 other states on Super Tuesday; neither the candidates nor the voters paid it much attention. Michigan and Florida moved up their primaries in defiance of national Democratic Party rules and were penalized. The result is that the primary season is longer and far more expensive, and no state gains an advantage except Iowa and New Hampshire, which now hold their contests in early January.
In late 2009 the two national parties were meeting to find a common solution.
|
module;
#include <vector>
#include <stdexcept>
#include <source_location>
#include <string>
#include <Windows.h>
#include <dpapi.h> // not including this header causes symbol has already been defined error
#include <Wincrypt.h>
module boring32.crypto:functions;
import boring32.error;
namespace Boring32::Crypto
{
// See also a complete example on MSDN at:
// https://docs.microsoft.com/en-us/windows/win32/seccrypto/example-c-program-using-cryptprotectdata
std::vector<std::byte> Encrypt(
const std::vector<std::byte>& data,
const std::wstring& password,
const std::wstring& description
)
{
DATA_BLOB dataIn;
dataIn.pbData = reinterpret_cast<BYTE*>(const_cast<std::byte*>(&data[0]));
dataIn.cbData = static_cast<DWORD>(data.size());
DATA_BLOB additionalEntropy{ 0 };
if (password.empty() == false)
{
additionalEntropy.pbData = reinterpret_cast<BYTE*>(const_cast<wchar_t*>(&password[0]));
additionalEntropy.cbData = static_cast<DWORD>(password.size()*sizeof(wchar_t));
}
// https://docs.microsoft.com/en-us/windows/win32/api/dpapi/nf-dpapi-cryptprotectdata
DATA_BLOB encryptedBlob{ 0 };
const wchar_t* descriptionCStr = description.empty()
? nullptr
: description.c_str();
DATA_BLOB* const entropy = password.empty()
? nullptr
: &additionalEntropy;
const bool succeeded = CryptProtectData(
&dataIn, // The data to encrypt.
descriptionCStr, // An optional description string.
entropy, // Optional additional entropy to
// to encrypt the string with, e.g.
// a password.
nullptr, // Reserved.
nullptr, // Pass a PromptStruct.
0, // Flags.
&encryptedBlob // Receives the encrypted information.
);
if (succeeded == false)
throw Error::Win32Error("CryptProtectData() failed", GetLastError());
// Should we really return std::byte instead of Windows' BYTE?
// Using std::byte means we'll need to cast at the API call.
std::vector<std::byte> returnValue(
(std::byte*)encryptedBlob.pbData,
(std::byte*)encryptedBlob.pbData + encryptedBlob.cbData
);
if (encryptedBlob.pbData)
LocalFree(encryptedBlob.pbData);
return returnValue;
}
std::vector<std::byte> Encrypt(
const std::wstring& str,
const std::wstring& password,
const std::wstring& description
)
{
const std::byte* buffer = (std::byte*)&str[0];
return Encrypt(
std::vector<std::byte>(buffer, buffer + str.size() * sizeof(wchar_t)),
password,
description
);
}
std::wstring DecryptString(
const std::vector<std::byte>& encryptedData,
const std::wstring& password,
std::wstring& outDescription
)
{
DATA_BLOB encryptedBlob;
encryptedBlob.pbData = (BYTE*)&encryptedData[0];
encryptedBlob.cbData = (DWORD)encryptedData.size();
DATA_BLOB additionalEntropy{ 0 };
if (password.empty() == false)
{
additionalEntropy.pbData = (BYTE*)&password[0];
additionalEntropy.cbData = (DWORD)password.size() * sizeof(wchar_t);
}
DATA_BLOB decryptedBlob;
LPWSTR descrOut = nullptr;
DATA_BLOB* const entropy = password.empty()
? nullptr
: &additionalEntropy;
// https://docs.microsoft.com/en-us/windows/win32/api/dpapi/nf-dpapi-cryptunprotectdata
const bool succeeded = CryptUnprotectData(
&encryptedBlob, // the encrypted data
&descrOut, // Optional description
entropy, // Optional additional entropy
// used to encrypt the string
// with, e.g. a password
nullptr, // Reserved
nullptr, // Optional prompt structure
0, // Flags
&decryptedBlob // Receives the decrypted data
);
if (succeeded == false)
throw Error::Win32Error("CryptUnprotectData() failed", GetLastError());
if (descrOut)
{
outDescription = descrOut;
LocalFree(descrOut);
}
std::wstring returnValue(
reinterpret_cast<wchar_t*>(decryptedBlob.pbData),
decryptedBlob.cbData / sizeof(wchar_t)
);
if (decryptedBlob.pbData)
LocalFree(decryptedBlob.pbData);
return returnValue;
}
std::vector<std::byte> Encrypt(
const DWORD blockByteLength,
const CryptoKey& key,
const std::vector<std::byte>& iv,
const std::vector<std::byte>& plainText,
const DWORD flags
)
{
if (key.GetHandle() == nullptr)
throw std::invalid_argument(__FUNCSIG__ ": key is null");
// IV is optional
PUCHAR pIV = nullptr;
ULONG ivSize = 0;
if (iv.empty() == false)
{
if (iv.size() != blockByteLength)
throw std::invalid_argument(__FUNCSIG__ ": IV must be the same size as the AES block length");
pIV = (PUCHAR)&iv[0];
ivSize = (ULONG)iv.size();
}
// Determine the byte size of the encrypted data
DWORD cbData = 0;
// https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptencrypt
NTSTATUS status = BCryptEncrypt(
key.GetHandle(),
(PUCHAR)&plainText[0],
(ULONG)plainText.size(),
nullptr,
pIV,
ivSize,
nullptr,
0,
&cbData,
flags
);
if (BCRYPT_SUCCESS(status) == false)
throw Error::NtStatusError("BCryptEncrypt() failed to count bytes", status);
// Actually do the encryption
std::vector<std::byte> cypherText(cbData, std::byte{ 0 });
status = BCryptEncrypt(
key.GetHandle(),
(PUCHAR)&plainText[0],
(ULONG)plainText.size(),
nullptr,
pIV,
ivSize,
(PUCHAR)&cypherText[0],
(ULONG)cypherText.size(),
&cbData,
flags
);
if (BCRYPT_SUCCESS(status) == false)
throw Error::NtStatusError("BCryptEncrypt() failed to encrypt", status);
return cypherText;
}
std::vector<std::byte> Decrypt(
const DWORD blockByteLength,
const CryptoKey& key,
const std::vector<std::byte>& iv,
const std::vector<std::byte>& cypherText,
const DWORD flags
)
{
if (key.GetHandle() == nullptr)
throw std::invalid_argument(__FUNCSIG__ ": key is null");
// IV is optional
PUCHAR pIV = nullptr;
ULONG ivSize = 0;
if (iv.empty() == false)
{
// Do all cipher algs require this?
if (iv.size() != blockByteLength)
throw std::invalid_argument(__FUNCSIG__ ": IV must be the same size as the AES block length");
pIV = (PUCHAR)&iv[0];
ivSize = (ULONG)iv.size();
}
// Determine the byte size of the decrypted data
DWORD cbData = 0;
// https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptdecrypt
NTSTATUS status = BCryptDecrypt(
key.GetHandle(),
(PUCHAR)&cypherText[0],
(ULONG)cypherText.size(),
nullptr,
pIV,
ivSize,
nullptr,
0,
&cbData,
flags
);
if (BCRYPT_SUCCESS(status) == false)
throw Error::NtStatusError("BCryptDecrypt() failed to count bytes", status);
// Actually do the decryption
std::vector<std::byte> plainText(cbData, std::byte{ 0 });
status = BCryptDecrypt(
key.GetHandle(),
(PUCHAR)&cypherText[0],
(ULONG)cypherText.size(),
nullptr,
pIV,
ivSize,
(PUCHAR)&plainText[0],
(ULONG)plainText.size(),
&cbData,
flags
);
if (BCRYPT_SUCCESS(status) == false)
throw Error::NtStatusError("BCryptDecrypt() failed to decrypt", status);
plainText.resize(cbData);
return plainText;
}
std::string ToBase64String(const std::vector<std::byte>& bytes)
{
// Determine the required size -- this includes the null terminator
DWORD size = 0;
bool succeeded = CryptBinaryToStringA(
(BYTE*)&bytes[0],
(DWORD)bytes.size(),
CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
nullptr,
&size
);
if (succeeded == false)
throw Error::Win32Error("CryptBinaryToStringA() failed when calculating size");
if (size == 0)
return "";
std::string returnVal(size, '\0');
succeeded = CryptBinaryToStringA(
(BYTE*)&bytes[0],
(DWORD)bytes.size(),
CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
(LPSTR)&returnVal[0],
&size
);
if (succeeded == false)
throw Error::Win32Error("CryptBinaryToStringA() failed when encoding");
// Remove terminating null character
if (returnVal.empty() == false)
returnVal.pop_back();
return returnVal;
}
std::wstring ToBase64WString(const std::vector<std::byte>& bytes)
{
// Determine the required size -- this includes the null terminator
DWORD size = 0;
// https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptbinarytostringw
bool succeeded = CryptBinaryToStringW(
(BYTE*)&bytes[0],
(DWORD)bytes.size(),
CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
nullptr,
&size
);
if (succeeded == false)
throw Error::Win32Error("CryptBinaryToStringW() failed when calculating size");
if (size == 0)
return L"";
std::wstring returnVal(size, L'\0');
succeeded = CryptBinaryToStringW(
(BYTE*)&bytes[0],
(DWORD)bytes.size(),
CRYPT_STRING_BASE64 | CRYPT_STRING_NOCRLF,
(LPWSTR)&returnVal[0],
&size
);
if (succeeded == false)
throw Error::Win32Error("CryptBinaryToStringW() failed when encoding");
// Remove terminating null character
if (returnVal.empty() == false)
returnVal.pop_back();
return returnVal;
}
std::vector<std::byte> ToBinary(const std::wstring& base64)
{
DWORD byteSize = 0;
bool succeeded = CryptStringToBinaryW(
&base64[0],
0,
CRYPT_STRING_BASE64,
nullptr,
&byteSize,
nullptr,
nullptr
);
if (succeeded == false)
throw Error::Win32Error("CryptStringToBinaryW() failed when calculating size");
std::vector<std::byte> returnVal(byteSize);
succeeded = CryptStringToBinaryW(
&base64[0],
0,
CRYPT_STRING_BASE64,
(BYTE*)&returnVal[0],
&byteSize,
nullptr,
nullptr
);
if (succeeded == false)
throw Error::Win32Error("CryptStringToBinaryW() failed when decoding");
returnVal.resize(byteSize);
return returnVal;
}
std::vector<std::byte> EncodeAsnString(const std::wstring& name)
{
DWORD encoded = 0;
// CERT_NAME_STR_FORCE_UTF8_DIR_STR_FLAG is required or the encoding
// produces subtle differences in the encoded bytes (DC3 vs FF in
// original buffer), which causes the match to fail
// See https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-certstrtonamew
const DWORD flags = CERT_X500_NAME_STR | CERT_NAME_STR_FORCE_UTF8_DIR_STR_FLAG;
bool succeeded = CertStrToNameW(
X509_ASN_ENCODING,
name.c_str(),
flags,
nullptr,
nullptr,
&encoded,
nullptr
);
if (succeeded == false)
throw Error::Win32Error("CertStrToNameW() failed", GetLastError());
std::vector<std::byte> byte(encoded);
succeeded = CertStrToNameW(
X509_ASN_ENCODING,
name.c_str(),
flags,
nullptr,
(BYTE*)&byte[0],
&encoded,
nullptr
);
if (succeeded == false)
throw Error::Win32Error("CertStrToNameW() failed", GetLastError());
byte.resize(encoded);
return byte;
}
std::wstring FormatAsnNameBlob(
const CERT_NAME_BLOB& certName,
const DWORD format
)
{
// https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-certnametostrw
DWORD characterSize = CertNameToStrW(
X509_ASN_ENCODING,
(CERT_NAME_BLOB*)&certName,
format,
nullptr,
0
);
if (characterSize == 0)
return L"";
std::wstring name(characterSize, '\0');
characterSize = CertNameToStrW(
X509_ASN_ENCODING,
(CERT_NAME_BLOB*)&certName,
format,
&name[0],
(DWORD)name.size()
);
name.pop_back(); // remove excess null character
return name;
}
}
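The helpers above all follow the same Win32 two-call convention: call once with a null output buffer to learn the required size, allocate a buffer of that size, then call again to fill it. A minimal, platform-independent sketch of that pattern follows; `formatName` is a hypothetical stand-in for an API such as `CryptBinaryToStringA()`, not part of the file above.

```cpp
#include <cstring>
#include <stdexcept>
#include <string>

// Hypothetical C-style API following the Win32 two-call convention:
// pass a null buffer to query the required size (including the null
// terminator), then call again with a buffer of that size to fill it.
bool formatName(const char* input, char* buffer, unsigned* size)
{
    const unsigned required = static_cast<unsigned>(std::strlen(input)) + 1;
    if (buffer == nullptr)
    {
        *size = required; // first call: report the required size only
        return true;
    }
    if (*size < required)
        return false; // caller's buffer is too small
    std::memcpy(buffer, input, required);
    *size = required;
    return true;
}

// Wrapper mirroring the structure of ToBase64String() above.
std::string FormatName(const char* input)
{
    unsigned size = 0;
    if (!formatName(input, nullptr, &size)) // first call: count bytes
        throw std::runtime_error("formatName() failed to count bytes");
    std::string result(size, '\0');
    if (!formatName(input, &result[0], &size)) // second call: fill buffer
        throw std::runtime_error("formatName() failed to format");
    result.pop_back(); // drop the terminating null character
    return result;
}
```

The same shape appears in `BCryptEncrypt()`, `CryptBinaryToStringA()`, `CertStrToNameW()`, and `CertNameToStrW()`; only the final trim step (`resize()` or `pop_back()`) differs, depending on whether the reported size includes a terminator.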
|
In terms of ecologically friendly flooring, bamboo is one of the top contenders. Not only is bamboo flooring made from totally renewable resources, but it also is available in a wide variety of design options. For those who desire the look of hardwood flooring but are concerned about the environmental consequences of harvesting trees, bamboo offers the perfect solution. While bamboo is not technically wood flooring, its appearance is close enough to fool even the most discerning eye.
Why is Bamboo Flooring Considered Environmentally Friendly?
Although bamboo is actually a type of grass, it is harder than red oak. It reaches full maturity in just 3 to 5 years rather than several decades, and regrowth occurs naturally without replanting. Harvesting bamboo is practically a necessity: the plant is so hardy that, left to its own devices, it would strain the surrounding environment. Rather than waste the harvested material, people have devised many ways to put it to good use, from thatched roofs to flooring.
Bamboo also has no requirements for irrigation, fertilizers, or pesticides when grown in its natural environment. Bamboo is naturally resistant to insects and pests. The lack of need for harsh chemicals during its growth only does more to keep the carbon footprint down.
How is Bamboo Flooring Manufactured?
There are several steps involved in turning bamboo into a material suitable for flooring. Upon harvest, the bamboo is boiled to remove its natural starches and moisture, which would otherwise create a welcoming environment for termites. The outer skin is then removed and the stalk is cut into strips. These strips are then boiled again, or carbonized, to make them even harder; the longer the carbonization process, the darker the color of the final product.
When the strips are ready they are formed into flooring either by gluing strips together or gluing a single layer of bamboo strips on top of another solid surface, resulting in either solid bamboo flooring or engineered bamboo flooring respectively. The flooring also goes through other processes to strengthen it further by applying laminate materials to increase scratch resistance.
What are the design options with bamboo flooring?
- Bamboo flooring is available in widths ranging from 3 ¾ inches to 7 inches and thicknesses of 5/8 inch and 9/16 inch.
- Finish options range from unfinished and natural, as in the FSC Unfinished Bamboo collection, to nearly black, with a choice of horizontal or vertical graining, as found in the FSC Designer collection.
- The two edge types of bamboo flooring are micro-beveled edges and square edges. Bamboo is also available in floating floor styles and nail or glue down styles.
How Durable is Bamboo Flooring?
Bamboo is naturally hard and durable, and the manufacturing process only increases this strength. While one should avoid sliding furniture across the floor or allowing water to stand, bamboo flooring will do well in almost any low-moisture room. Bamboo flooring, from the EcoBamboo Collection to the FSC Prestige Collection, is an investment in beauty and durability that is sure to add value to any home with minimal environmental impact.
|
By Karen Kaplan
5:53 PM EST, January 30, 2013
Attention dieters: Many of the “facts” you think you know about obesity and weight loss are wrong.
So says a report published in Thursday’s edition of the New England Journal of Medicine. An international team of dietitians, doctors and other experts examined more than a dozen ideas about obesity that are widely believed to be true but aren’t actually supported by reliable medical evidence. It’s not just dieters who buy into these mistaken notions, the study authors note – much of this incorrect conventional wisdom is espoused by physicians, academic scientists, government agencies and (gulp) the media.
Seven of these errant ideas were classified as “myths,” meaning they are “beliefs held to be true despite substantial refuting evidence.” Another six were categorized as “presumptions,” or “beliefs held to be true for which convincing evidence does not yet confirm or disprove their truth.”
Without further ado, let’s get to the myth-busting:
Eating a little less or exercising a little more will lead to large weight loss over time, as long as those behaviors are sustained. This myth is based on the idea that 3,500 calories are equal to one pound. That equation was based on short-term experiments. In the long-term, the body compensates in various ways that slow down weight loss. For instance, the equation predicts that a person who burns 100 extra calories per day will lose more than 50 pounds over five years; in reality, that exercise regimen will cause a person to shed only about 10 pounds (assuming calorie intake remains the same).
It’s important to set realistic weight-loss goals so dieters don’t get frustrated. Studies that have examined this reasonable-sounding assumption have found that having realistic goals has no impact on the amount of weight lost. Indeed, some studies have found that those who set the most ambitious goals lost the most weight, even if they fell short of their initial expectations.
Slow, gradual weight loss is easier to sustain than large, rapid weight loss. In fact, clinical trials have found that people who jump-start their diets by dropping a lot of weight in the beginning (by consuming only 800 to 1,200 calories per day, for instance) had the best results in long-term studies.
In order to help someone lose weight, you must gauge their readiness to stick to a diet. Experimental evidence shows that readiness isn’t related to diet results.
School P.E. classes help reduce and prevent childhood obesity. While there is certainly some amount of physical education that would help fight childhood obesity, P.E. classes in their current form have not been shown to reduce BMI or obesity in kids on a consistent basis.
Babies who are breast-fed are less likely to become obese. If you think this is true, you’re in good company – the World Health Organization presented this “fact” in one of its reports. But a randomized, controlled clinical trial that followed 13,000 children for more than six years found “no compelling evidence” that breastfeeding staves off obesity, according to the New England Journal of Medicine report. (The authors did note that breastfeeding has other benefits and should be encouraged anyway.)
You can burn 100 to 300 calories by having sex. In fact, having sex burns calories at about the same rate as walking at a pace of 2.5 mph. “Given that the average bout of sexual activity lasts about 6 minutes,” the authors write, a man in his early to mid-30s might burn 21 calories. But wait, it gets worse: Considering that this man could burn 7 calories just watching TV, the true benefit of having sex is only 14 additional calories burned.
The report also says that these widely accepted ideas are just as likely to be false as true:
* Eating breakfast instead of skipping it will help prevent obesity.
* Long-term eating and exercise habits are set in early childhood.
* Regardless of what else you do, eating more fruits and vegetables will lead to weight loss (or less weight gain).
* Yo-yo dieting will take months or years off your life.
* Snacking will make you gain weight.
* The availability of sidewalks, parks and other aspects of the “built environment” influence the prevalence of obesity.
How’s that for a reality check?
Copyright © 2013, Los Angeles Times
|
// Copyright (c) 2012 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "content/worker/websharedworkerclient_proxy.h"
#include "base/bind.h"
#include "base/command_line.h"
#include "base/message_loop.h"
#include "content/common/fileapi/file_system_dispatcher.h"
#include "content/common/fileapi/webfilesystem_callback_dispatcher.h"
#include "content/common/quota_dispatcher.h"
#include "content/common/webmessageportchannel_impl.h"
#include "content/common/worker_messages.h"
#include "content/public/common/content_switches.h"
#include "content/worker/shared_worker_devtools_agent.h"
#include "content/worker/websharedworker_stub.h"
#include "content/worker/worker_thread.h"
#include "content/worker/worker_webapplicationcachehost_impl.h"
#include "ipc/ipc_logging.h"
#include "third_party/WebKit/Source/Platform/chromium/public/WebString.h"
#include "third_party/WebKit/Source/Platform/chromium/public/WebURL.h"
#include "third_party/WebKit/Source/WebKit/chromium/public/WebDocument.h"
#include "third_party/WebKit/Source/WebKit/chromium/public/WebFileSystemCallbacks.h"
#include "third_party/WebKit/Source/WebKit/chromium/public/WebFrame.h"
#include "third_party/WebKit/Source/WebKit/chromium/public/WebSecurityOrigin.h"
using WebKit::WebApplicationCacheHost;
using WebKit::WebFrame;
using WebKit::WebMessagePortChannel;
using WebKit::WebMessagePortChannelArray;
using WebKit::WebSecurityOrigin;
using WebKit::WebString;
using WebKit::WebWorker;
using WebKit::WebSharedWorkerClient;
namespace content {
// How long to wait for worker to finish after it's been told to terminate.
#define kMaxTimeForRunawayWorkerSeconds 3
WebSharedWorkerClientProxy::WebSharedWorkerClientProxy(
int route_id, WebSharedWorkerStub* stub)
: route_id_(route_id),
appcache_host_id_(0),
stub_(stub),
ALLOW_THIS_IN_INITIALIZER_LIST(weak_factory_(this)),
devtools_agent_(NULL) {
}
WebSharedWorkerClientProxy::~WebSharedWorkerClientProxy() {
}
void WebSharedWorkerClientProxy::postMessageToWorkerObject(
const WebString& message,
const WebMessagePortChannelArray& channels) {
std::vector<int> message_port_ids(channels.size());
std::vector<int> routing_ids(channels.size());
for (size_t i = 0; i < channels.size(); ++i) {
WebMessagePortChannelImpl* webchannel =
static_cast<WebMessagePortChannelImpl*>(channels[i]);
message_port_ids[i] = webchannel->message_port_id();
webchannel->QueueMessages();
DCHECK(message_port_ids[i] != MSG_ROUTING_NONE);
routing_ids[i] = MSG_ROUTING_NONE;
}
Send(new WorkerMsg_PostMessage(
route_id_, message, message_port_ids, routing_ids));
}
void WebSharedWorkerClientProxy::postExceptionToWorkerObject(
const WebString& error_message,
int line_number,
const WebString& source_url) {
Send(new WorkerHostMsg_PostExceptionToWorkerObject(
route_id_, error_message, line_number, source_url));
}
void WebSharedWorkerClientProxy::postConsoleMessageToWorkerObject(
int source,
int type,
int level,
const WebString& message,
int line_number,
const WebString& source_url) {
WorkerHostMsg_PostConsoleMessageToWorkerObject_Params params;
params.source_identifier = source;
params.message_type = type;
params.message_level = level;
params.message = message;
params.line_number = line_number;
params.source_url = source_url;
Send(new WorkerHostMsg_PostConsoleMessageToWorkerObject(route_id_, params));
}
void WebSharedWorkerClientProxy::confirmMessageFromWorkerObject(
bool has_pending_activity) {
Send(new WorkerHostMsg_ConfirmMessageFromWorkerObject(
route_id_, has_pending_activity));
}
void WebSharedWorkerClientProxy::reportPendingActivity(
bool has_pending_activity) {
Send(new WorkerHostMsg_ReportPendingActivity(
route_id_, has_pending_activity));
}
void WebSharedWorkerClientProxy::workerContextClosed() {
Send(new WorkerHostMsg_WorkerContextClosed(route_id_));
}
void WebSharedWorkerClientProxy::workerContextDestroyed() {
Send(new WorkerHostMsg_WorkerContextDestroyed(route_id_));
// Tell the stub that the worker has shutdown - frees this object.
if (stub_)
stub_->Shutdown();
}
WebKit::WebNotificationPresenter*
WebSharedWorkerClientProxy::notificationPresenter() {
// TODO(johnnyg): Notifications are not yet hooked up to workers.
// Coming soon.
NOTREACHED();
return NULL;
}
WebApplicationCacheHost* WebSharedWorkerClientProxy::createApplicationCacheHost(
WebKit::WebApplicationCacheHostClient* client) {
WorkerWebApplicationCacheHostImpl* host =
new WorkerWebApplicationCacheHostImpl(stub_->appcache_init_info(),
client);
// Remember the id of the instance we create so we have access to that
// value when creating nested dedicated workers in createWorker.
appcache_host_id_ = host->host_id();
return host;
}
// TODO(abarth): Security checks should use WebDocument or WebSecurityOrigin,
// not WebFrame as the context object because WebFrames can contain different
// WebDocuments at different times.
bool WebSharedWorkerClientProxy::allowDatabase(WebFrame* frame,
const WebString& name,
const WebString& display_name,
unsigned long estimated_size) {
WebSecurityOrigin origin = frame->document().securityOrigin();
if (origin.isUnique())
return false;
bool result = false;
Send(new WorkerProcessHostMsg_AllowDatabase(
route_id_, GURL(origin.toString().utf8()), name, display_name,
estimated_size, &result));
return result;
}
bool WebSharedWorkerClientProxy::allowFileSystem() {
bool result = false;
Send(new WorkerProcessHostMsg_AllowFileSystem(
route_id_, stub_->url().GetOrigin(), &result));
return result;
}
void WebSharedWorkerClientProxy::openFileSystem(
WebKit::WebFileSystemType type,
long long size,
bool create,
WebKit::WebFileSystemCallbacks* callbacks) {
ChildThread::current()->file_system_dispatcher()->OpenFileSystem(
stub_->url().GetOrigin(), static_cast<fileapi::FileSystemType>(type),
size, create, new WebFileSystemCallbackDispatcher(callbacks));
}
bool WebSharedWorkerClientProxy::allowIndexedDB(const WebKit::WebString& name) {
bool result = false;
Send(new WorkerProcessHostMsg_AllowIndexedDB(
route_id_, stub_->url().GetOrigin(), name, &result));
return result;
}
void WebSharedWorkerClientProxy::queryUsageAndQuota(
WebKit::WebStorageQuotaType type,
WebKit::WebStorageQuotaCallbacks* callbacks) {
ChildThread::current()->quota_dispatcher()->QueryStorageUsageAndQuota(
stub_->url().GetOrigin(), static_cast<quota::StorageType>(type),
QuotaDispatcher::CreateWebStorageQuotaCallbacksWrapper(callbacks));
}
void WebSharedWorkerClientProxy::dispatchDevToolsMessage(
const WebString& message) {
if (devtools_agent_)
devtools_agent_->SendDevToolsMessage(message);
}
void WebSharedWorkerClientProxy::saveDevToolsAgentState(
const WebKit::WebString& state) {
if (devtools_agent_)
devtools_agent_->SaveDevToolsAgentState(state);
}
bool WebSharedWorkerClientProxy::Send(IPC::Message* message) {
return WorkerThread::current()->Send(message);
}
void WebSharedWorkerClientProxy::EnsureWorkerContextTerminates() {
// This shuts down the process cleanly from the perspective of the browser
// process, and prevents the crashed-worker infobar from appearing on the new
// page. It's OK to post several of these, because the first executed task
// will exit the message loop and subsequent ones won't be executed.
MessageLoop::current()->PostDelayedTask(FROM_HERE,
base::Bind(
&WebSharedWorkerClientProxy::workerContextDestroyed,
weak_factory_.GetWeakPtr()),
base::TimeDelta::FromSeconds(kMaxTimeForRunawayWorkerSeconds));
}
} // namespace content
|
#include <ansi.h>
inherit NPC;
void create()
{
set_name("血刀老祖--幻", ({ "xuedao laozu-shadow","shadow" }));
set("long",@LONG
這喇嘛身着黃袍,年紀極老,尖頭削耳,臉上都是皺紋。他就是血刀門第四代掌門。
不過仔細一看,似乎不象是真人。
LONG
);
set("title",HIR"血刀門第四代掌門"NOR);
set("gender", "男性");
set("age", 85);
set("attitude", "peaceful");
set("shen_type", -1);
set("str", 130);
set("int", 130);
set("con", 130);
set("dex", 130);
set("max_qi", 311000);
set("max_jing", 101100);
set("neili", 20100);
set("max_neili", 20100);
set("jiali", 50);
set("combat_exp", 180011000);
set("score", 18110000);
set_skill("lamaism", 1150);
set_skill("literate", 1180);
set_skill("force", 1180);
set_skill("parry", 1180);
set_skill("blade", 1180);
set_skill("sword", 1120);
set_skill("dodge", 1180);
set_skill("longxiang-gong", 1180);
set_skill("shenkong-xing", 1180);
set_skill("hand", 1180);
set_skill("dashou-yin", 1180);
set_skill("mingwang-jian", 1120);
set_skill("xuedao-daofa", 2100);
map_skill("force", "longxiang-gong");
map_skill("dodge", "shenkong-xing");
map_skill("hand", "dashou-yin");
map_skill("parry", "xuedao-daofa");
map_skill("blade", "xuedao-daofa");
map_skill("sword", "mingwang-jian");
prepare_skill("hand","dashou-yin");
set("chat_chance_combat", 50);
set("chat_msg_combat", ({
(: perform_action, "blade.shendao" :),
(: perform_action, "blade.shendao" :),
(: perform_action, "blade.shendao" :),
(: perform_action, "blade.shendao" :),
(: perform_action, "blade.shendao" :),
}) );
create_family("雪山寺", 4, "弟子");
set("class", "bonze");
setup();
carry_object("/clone/weapon/blade.c")->wield();
}
int accept_fight(object ob)
{
ob = this_player();
if (!query("fighter", ob))
{
command("grin");
command("say 好,送死的來了!\n");
set("fighter", 1, ob);
set_temp("m_success/幻影", 1, ob);
remove_call_out("kill_ob");
call_out("kill_ob", 1, ob);
return 1;
}
write(query("name", ob) + ",你已經上過場了!\n");
return 0;
}
void die()
{
object ob;
message_vision("\n$N一晃,變為一縷輕煙消失了。\n", this_object());
ob = new("/quest/tulong/npc/shadow2");
ob->move(environment(this_object()));
destruct(this_object());
}
|
/**
 * @file
 * Input/output module.
 * This module encapsulates the program's input and output functionality
 * (in particular the GLUT callbacks).
 *
 * Part of an exercise for the module "Praktikum Grundlagen der Computergrafik"
 * (computer graphics fundamentals lab) at FH Wedel.
 *
 * @author Nicolas Hollmann, Daniel Klintworth
 */
/* ---- Include system headers ---- */
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
/* ---- Include project headers ---- */
#include "io.h"
#include "types.h"
#include "input.h"
#include "logic.h"
#include "scene.h"
#include "hud.h"
#include "debugGL.h"
/* ---- Constants ---- */
/** Number of calls to the timer function per second */
#define TIMER_CALLS_PS 60
/* ---- Internal functions ---- */
/**
 * Sets up a viewport for 3-dimensional rendering with perspective
 * projection and defines a camera.
 *
 * @param x, y position of the viewport in the window - (0, 0) is the lower left corner
 * @param width, height width and height of the viewport
 */
static void set3DViewport(GLint x, GLint y, GLint width, GLint height)
{
/* Determine the aspect ratio */
double aspect = (double)width / height;
/* The following operations affect the projection matrix */
glMatrixMode(GL_PROJECTION);
/* Load the identity matrix */
glLoadIdentity();
/* Set the viewport position and extent */
glViewport(x, y, width, height);
/* Perspective projection */
gluPerspective(70, /* field-of-view angle */
aspect, /* aspect ratio */
0.05, /* near clipping plane */
100); /* far clipping plane */
/* The following operations affect the modelview matrix */
glMatrixMode(GL_MODELVIEW);
/* Load the identity matrix */
glLoadIdentity();
}
/**
 * Sets up a viewport for 2-dimensional rendering.
 *
 * @param x, y position of the viewport in the window - (0, 0) is the lower left corner
 * @param width, height width and height of the viewport
 */
static void set2DViewport(GLint x, GLint y, GLint width, GLint height)
{
/* Determine the aspect ratio */
double aspect = (double)width / height;
/* The following operations affect the projection matrix */
glMatrixMode(GL_PROJECTION);
/* Load the identity matrix */
glLoadIdentity();
/* Set the viewport position and extent */
glViewport(x, y, width, height);
/* The coordinate system always remains square */
if (aspect <= 1)
{
gluOrtho2D(-1, 1, /* left, right */
-1 / aspect, 1 / aspect); /* bottom, top */
}
else
{
gluOrtho2D(-1 * aspect, 1 * aspect, /* left, right */
-1, 1); /* bottom, top */
}
/* The following operations affect the modelview matrix */
glMatrixMode(GL_MODELVIEW);
/* Load the identity matrix */
glLoadIdentity();
}
/**
 * Mouse button callback.
 * @param button button that triggered the callback.
 * @param state state of the button that triggered the callback.
 * @param x x position of the mouse pointer when the callback fired.
 * @param y y position of the mouse pointer when the callback fired.
 */
static void cbMouseButton (int button, int state, int x, int y)
{
handleMouseEvent(x, y, mouseButton, button, state);
}
/**
 * Mouse motion callback.
 * @param x x position of the mouse pointer.
 * @param y y position of the mouse pointer.
 */
static void cbMouseMotion (int x, int y)
{
handleMouseEvent(x, y, mouseMotion, 0, 0);
}
/**
 * Mouse passive-motion callback.
 * @param x x position of the mouse pointer.
 * @param y y position of the mouse pointer.
 */
static void
cbMousePassiveMotion (int x, int y)
{
handleMouseEvent(x, y, mousePassiveMotion, 0, 0);
}
/**
 * Callback for key presses.
 * Invokes the event handling for keyboard events.
 *
 * @param key affected key (in)
 * @param x x position of the mouse at the time of the key press (in)
 * @param y y position of the mouse at the time of the key press (in)
 */
static void cbKeyboard(unsigned char key, int x, int y)
{
handleKeyboardEvent(key, GLUT_DOWN, GL_FALSE, x, y);
}
/**
 * Callback for key releases.
 * Invokes the event handling for keyboard events.
 *
 * @param key affected key (in)
 * @param x x position of the mouse at the time of the release (in)
 * @param y y position of the mouse at the time of the release (in)
 */
static void cbKeyboardUp(unsigned char key, int x, int y)
{
handleKeyboardEvent(key, GLUT_UP, GL_FALSE, x, y);
}
/**
 * Callback for presses of special keys.
 * Invokes the event handling for keyboard events.
 *
 * @param key affected key (in)
 * @param x x position of the mouse at the time of the key press (in)
 * @param y y position of the mouse at the time of the key press (in)
 */
static void cbSpecial(int key, int x, int y)
{
handleKeyboardEvent(key, GLUT_DOWN, GL_TRUE, x, y);
}
/**
 * Callback for releases of special keys.
 * Invokes the event handling for keyboard events.
 *
 * @param key affected key (in)
 * @param x x position of the mouse at the time of the release (in)
 * @param y y position of the mouse at the time of the release (in)
 */
static void cbSpecialUp(int key, int x, int y)
{
handleKeyboardEvent(key, GLUT_UP, GL_TRUE, x, y);
}
/**
 * Timer callback.
 * Initiates the computation of the current position and colors and the
 * subsequent redraw, then re-registers itself as the timer callback.
 *
 * @param lastCallTime time at which the function was registered as the
 * timer function (in)
 */
static void cbTimer(int lastCallTime)
{
/* Time elapsed since program start, in milliseconds */
int thisCallTime = glutGet(GLUT_ELAPSED_TIME);
/* Time elapsed since the last call of this function, in seconds */
double interval = (double)(thisCallTime - lastCallTime) / 1000.0f;
if (isPaused())
{
interval = 0.0;
}
/* Update the game logic */
updateLogic(interval);
/* Register again as the timer function */
glutTimerFunc(1000 / TIMER_CALLS_PS, cbTimer, thisCallTime);
/* Trigger a redraw */
glutPostRedisplay();
}
/**
 * Display callback.
 * Clears the buffers, invokes the drawing of the scene, and swaps the
 * front and back buffers.
 */
static void cbDisplay(void)
{
/* Read the window dimensions */
int width = glutGet(GLUT_WINDOW_WIDTH);
int height = glutGet(GLUT_WINDOW_HEIGHT);
/* Clear the buffers */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* 3D view */
set3DViewport(0, 0, width, height);
if (getAnaglyphMode() == ANAGLYPH_OFF)
{
drawScene3D(EYE_CENTER);
}
else
{
glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
drawScene3D(EYE_LEFT);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, getAnaglyphMode() == ANAGLYPH_COLOR, GL_TRUE, GL_TRUE);
drawScene3D(EYE_RIGHT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
/* 2D minimap */
set2DViewport(width / 3 * 2, height / 3 * 2, width / 3, height / 3);
drawScene2D();
/* HUD */
set2DViewport(0, 0, width, height);
drawHUD();
/* Display the frame */
glutSwapBuffers();
}
/**
 * Registers the GLUT callback routines.
 */
static void registerCallbacks(void)
{
/* Mouse button callback (invoked when a mouse button
* is pressed or released) */
glutMouseFunc (cbMouseButton);
/* Mouse motion callback (invoked when the mouse is moved
* while a mouse button is held down) */
glutMotionFunc (cbMouseMotion);
/* Mouse motion callback (invoked when the mouse is moved
* while no mouse button is held down) */
glutPassiveMotionFunc (cbMousePassiveMotion);
/* Key press callback - invoked when a key is pressed */
glutKeyboardFunc(cbKeyboard);
/* Key release callback - invoked when a key is released */
glutKeyboardUpFunc(cbKeyboardUp);
/* Special-key press callback - invoked when a special key
* (F1 - F12, left, right, up, down, page up, page down, home, end or
* insert) is pressed */
glutSpecialFunc(cbSpecial);
/* Special-key release callback - invoked when a special key
* is released */
glutSpecialUpFunc(cbSpecialUp);
/* Ignore automatic key repeat */
glutIgnoreKeyRepeat(1);
/* Timer callback - invoked once after msecs milliseconds */
glutTimerFunc(1000 / TIMER_CALLS_PS, /* msecs - until func is called */
cbTimer, /* func - function to call */
glutGet(GLUT_ELAPSED_TIME)); /* value - parameter passed to
func when it is called */
/* Display callback - triggered implicitly at several points (e.g. after the
* reshape callback) or explicitly (via glutPostRedisplay) */
glutDisplayFunc(cbDisplay);
}
/* ---- Public functions ---- */
/**
 * Initializes the program (including I/O and OpenGL) and starts the
 * event handling.
 *
 * @param title caption of the window (in)
 * @param width width of the window (in)
 * @param height height of the window (in)
 * @return ID of the created window, 0 on failure
 */
int initAndStartIO(char *title, int width, int height)
{
int windowID = 0;
/* Imitate a command line */
int argc = 1;
char *argv = "cmd";
/* Initialize GLUT */
glutInit(&argc, &argv);
INFO(("Creating window...\n"));
/* Initialize the window */
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); /* for double buffering */
glutInitWindowSize(width, height);
glutInitWindowPosition(
(glutGet(GLUT_SCREEN_WIDTH) - width) / 2,
(glutGet(GLUT_SCREEN_HEIGHT) - height) / 2);
/* Create the window */
windowID = glutCreateWindow(title);
if (windowID)
{
INFO(("...fertig.\n\n"));
INFO(("Initialisiere Zufallsgenerator...\n"));
srand(time(0));
INFO(("...fertig.\n\n"));
INFO(("Initialisiere Szene...\n"));
if (initScene())
{
INFO(("...fertig.\n\n"));
INFO(("Initialisiere Logik...\n"));
initLevel(LEVEL_1);
INFO(("...fertig.\n\n"));
INFO(("Registriere Callbacks...\n"));
registerCallbacks();
INFO(("...fertig.\n\n"));
INFO(("Trete in Schleife der Ereignisbehandlung ein...\n"));
glutMainLoop();
}
else
{
INFO(("...fehlgeschlagen.\n\n"));
glutDestroyWindow(windowID);
windowID = 0;
}
}
else
{
INFO(("...fehlgeschlagen.\n\n"));
}
return windowID;
}
|
Discover the beauty, and functionality, of ancient Native American pottery! These replicas are a wonderful way to learn more about a culture.
Pottery in any culture is an age-old practice that was originally practical and eventually became seen as an art form. Ancient Native American pieces are beautifully decorated ceramics that are not only functional but a pleasure to look at as well. Choose a style of pottery from one Native nation to replicate.
Based on the style of the pot, choose an armature such as a recycled container or crumpled aluminum foil. Cover this basic form with a thin Crayola® Model Magic® layer.
Add embellishments to your pot by creating ropes, leather cords, feathers, and beads, as appropriate to the culture and time period, and affixing them to your pot. For even edges, cut the compound with Crayola Scissors.
One way to make beads is to cut a long piece of fishing line. Make small Model Magic balls and wrap them around the fishing line about half-way down the length. Leave enough fishing line uncovered so you can use it to wrap around the lip of the pot later. Try making different shapes of beads and alternating colors to get different patterns. Another way to make beads is by wrapping the Model Magic compound around short pieces of plastic straws. You can then string your beads any way you like.
At the end of a string of beads on Native pottery, there are often large decorative feathers. Roll out Model Magic compound into feathers. Combine different colors for multicolor feathers. Then take a craft stick or other modeling tool to etch in the feather’s vein and the edges. Press the feather on the fishing line at the end of your beads. Cut off any fishing line that sticks out on the bottom. Wrap the extra fishing line around the lip of the pot and tie a knot.
You can hide the fishing line by covering the lip with cords or other decorative elements. Adorn your pot with a beautiful rope by braiding three long pieces of Model Magic compound and then wrapping the braid around the lip.
Add leather-like cords by rolling out long pieces of Model Magic compound and pressing them a little flat. Then twist and hang them on your pot. Model Magic® dries to the touch overnight and dries completely in 2 to 3 days.
Study Native American use of animal hides for homes and clothing.
|
Saturday, December 18, 2010
Immigration Follow-up
A few days ago I talked about immigration in the context of a down economy and issued a challenge of a sort to Karl Smith. He challenged me back, asking if what I described were the general case or a special case, and if a special case, what makes it different?
Easy as pie.
The basic generalities of economics were developed in the time of Say and Smith. To whatever extent that they were valid then, they were in the context where a large company might have a dozen employees, barriers to entry were not huge, and there was no great opportunity for monopoly.
Contrast today, when the economic landscape is controlled by gigantic trans-national corporations, and energy is controlled by cartels. Which is the special case?
What about booms and busts? What about wars and the aftermath of wars? What span of American history would represent the general case?
To answer your question, population and the economy generally (if I may now use that word) grow together. The fact that GDP/capita has had an approximately constant growth rate over spans of decades is real world evidence that population growth does not lead to rising unemployment. So, one can reasonably surmise that immigration need not directly cause unemployment. If there were a labor shortage, then pretty clearly immigration could help the economy expand.
So, on balance, and despite my protestations, I was talking about something other than what looks like the usual general case over the last hundred years. Whether this constitutes a special case, or one of many possible more-or-less usual cases, remains unanswered for lack of a sufficiently broad and detailed historical database.
My main point was – and remains – that generalities are dangerous, the real world is messy, and dogmatic statements that reflect an absolutist view of economics – a view that is highly questionable if you simply look out your window on any particular Thursday morning – ought not to be held up as examples of clear thinking and an appropriate way of viewing the world.
‘Cuz it seems a bit ad hoc.
And what makes this case different is the liquidity trap at the 0-interest bound, and the aggregate demand shortfall due in part to private deleveraging that causes the unwillingness of corporations to invest in either equipment or employees, compounded by the unwillingness of banks to lend at any level of risk when they can get an absolutely risk-free 0.25% on excess reserves from the Fed. Whether the collapse of the M1 multiplier is cause or effect remains a mystery to me.
We are hovering at the edge of a deflationary spiral, similar to the situation in 1929; but austerity rules the day, almost world wide. I am deeply pessimistic about this turning around any time soon – or even over the next several years. The incoming Repug majority in the House will only make it worse.
|
[Note: in Japan, it is customary to refer to a person with their last name first. We have retained this practice in the below excerpt from Kurosawa’s text.]
The gate was growing larger and larger in my mind’s eye. I was location-scouting in the ancient capital of Kyoto for Rashomon, my eleventh-century period film. The Daiei management was not very happy with the project. They said the content was difficult and the title had no appeal. They were reluctant to let the shooting begin. Day by day, as I waited, I walked around Kyoto and the still-more-ancient capital of Nara a few miles away, studying the classical architecture. The more I saw, the larger the image of the Rashomon gate became in my mind.
At first I thought my gate should be about the size of the entrance gate to Toji Temple in Kyoto. Then it became as large as the Tengaimon gate in Nara, and finally as big as the main two-story gates of the Ninnaji and Todaiji temples in Nara. This image enlargement occurred not just because I had the opportunity to see real gates dating from that period, but because of what I was learning, from documents and relics, about the long-since-destroyed Rashomon gate itself.
“Rashomon” actually refers to the Rajomon gate; the name was changed in a Noh play written by Kanze Nobumitsu. “Rajo” indicates the outer precincts of the castle, so “Rajomon” means the main gate to the castle’s outer grounds. The gate for my film Rashomon was the main gate to the outer precincts of the ancient capital (Kyoto was at that time called “Heian-Kyo”). If one entered the capital through the Rajomon gate and continued due north along the main thoroughfare of the metropolis, one came to the Shujakumon gate at the end of it, and the Toji and Saiji temples to the east and west, respectively. Considering this city plan, it would have been strange had the outer main gate not been the biggest gate of all. There is tangible evidence that it in fact was: the blue roof tiles that survive from the original Rajomon gate show that it was large. But, no matter how much research we did, we couldn’t discover the actual dimensions of the vanished structure.
As a result, we had to construct the Rashomon gate to the city based on what we could learn from looking at extant temple gates, knowing that the original was probably different. What we built as a set was gigantic. It was so immense that a complete roof would have buckled the support pillars. Using the artistic device of dilapidation as an excuse, we constructed only half a roof and were able to get away with our measurements. To be historically accurate, the imperial palace and the Shujakumon gate should have been visible looking north through our gate. But on the Daiei back lot such distances were out of the question, and even if we had been able to find the space, the budget would have made it impossible. We made do with a cut-out mountain to be seen through the gate. Even so, what we built was extraordinarily large for an open set.
When I took this project to Daiei, I told them the only sets I would need were the gate and the tribunal courtyard wall where all the survivors, participants and witnesses of the rape and murder that form the story of the film are questioned. Everything else, I promised them, would be shot on location. Based on this low-budget set estimate, Daiei happily took on the project.
Later, Kawaguchi Matsutaro, at that time a Daiei executive, complained that they had really been fed a line. To be sure, only the gate set had to be built, but for the price of that one mammoth set they could have had over a hundred ordinary sets. But, to tell the truth, I hadn’t intended so big a set to begin with. It was while I was kept waiting all that time that my research deepened and my image of the gate swelled to its startling proportions.
When I had finished Scandal for the Shochiku studios, Daiei asked if I wouldn’t direct one more film for them. As I cast about for what to film, I suddenly remembered a script based on the short story “Yabu no naka” (“In a Grove”) by Akutagawa Ryunosuke. It had been written by Hashimoto Shinobu, who had been studying under director Itami Mansaku. It was a very well-written piece, but not long enough to make into a feature film. This Hashimoto had visited my home, and I talked with him for hours. He seemed to have substance, and I took a liking to him. He later wrote the screenplays for Ikiru (1952) and Shichinin no samurai (Seven Samurai, 1954) with me. The script I remembered was his Akutagawa adaptation called “Male-Female.”
Probably my subconscious told me it was not right to have put that script aside; probably I was, without being aware of it, wondering all the while if I couldn’t do something with it. At that moment the memory of it jumped out of one of those creases in my brain and told me to give it a chance. At the same time I recalled that “In a Grove” is made up of three stories, and realized that if I added one more, the whole would be just the right length for a feature film. Then I remembered the Akutagawa story “Rashomon.” Like “In a Grove,” it was set in the Heian period (794-1184). The film Rashomon took shape in my mind.
Since the advent of the talkies in the 1930s, I felt, we had misplaced and forgotten what was so wonderful about the old silent movies. I was aware of the aesthetic loss as a constant irritation. I sensed a need to go back to the origins of the motion picture to find this peculiar beauty again; I had to go back into the past.
In particular, I believed that there was something to be learned from the spirit of the French avant-garde films of the 1920s. Yet in Japan at this time we had no film library. I had to forage for old films, and try to remember the structure of those I had seen as a boy, ruminating over the aesthetics that had made them special.
Rashomon would be my testing ground, the place where I could apply the ideas and wishes growing out of my silent-film research. To provide the symbolic background atmosphere, I decided to use the Akutagawa “In a Grove” story, which goes into the depths of the human heart as if with a surgeon’s scalpel, laying bare its dark complexities and bizarre twists. These strange impulses of the human heart would be expressed through the use of an elaborately fashioned play of light and shadow. In the film, people going astray in the thicket of their hearts would wander into a wider wilderness, so I moved the setting to a large forest. I selected the virgin forest of the mountains surrounding Nara, and the forest belonging to the Komyoji temple outside Kyoto.
There were only eight characters, but the story was both complex and deep. The script was done as straightforwardly and briefly as possible, so I felt I should be able to create a rich and expansive visual image in turning it into a film. Fortunately, I had as cinematographer a man I had long wanted to work with, Miyagawa Kazuo; I had Hayasaka to compose the music and Matsuyama as art director. The cast was Mifune Toshiro, Mori Masayuki, Kyo Machiko, Shimura Takashi, Chiaki Minoru, Ueda Kichijiro, Kato Daisuke and Honma Fumiko; all were actors whose temperaments I knew, and I could not have wished for a better line-up. Moreover, the story was supposed to take place in summer, and we had, ready to hand, the scintillating midsummer heat of Kyoto and Nara. With all these conditions so neatly met, I could ask nothing more. All that was left was to begin the film.
However, one day just before the shooting was to start, the three assistant directors Daiei had assigned me came to see me at the inn where I was staying. I wondered what the problem could be. It turned out that they found the script baffling and wanted me to explain it to them. “Please read it again more carefully,” I told them. “If you read it diligently, you should be able to understand it because it was written with the intention of being comprehensible.” But they wouldn’t leave. “We believe we have read it carefully, and we still don’t understand it at all; that’s why we want you to explain it to us.” For their persistence I gave them this simple explanation:
Human beings are unable to be honest with themselves about themselves. They cannot talk about themselves without embellishing. This script portrays such human beings–the kind who cannot survive without lies to make them feel they are better people than they really are. It even shows this sinful need for flattering falsehood going beyond the grave—even the character who dies cannot give up his lies when he speaks to the living through a medium. Egoism is a sin the human being carries with him from birth; it is the most difficult to redeem. This film is like a strange picture scroll that is unrolled and displayed by the ego. You say that you can’t understand this script at all, but that is because the human heart itself is impossible to understand. If you focus on the impossibility of truly understanding human psychology and read the script one more time, I think you will grasp the point of it.
After I finished, two of the three assistant directors nodded and said they would try reading the script again. They got up to leave, but the third, who was the chief, remained unconvinced. He left with an angry look on his face. (As it turned out, this chief assistant director and I never did get along. I still regret that in the end I had to ask for his resignation. But, aside from this, the work went well.)
During the rehearsals before the shooting I was left virtually speechless by Kyo Machiko’s dedication. She came in to where I was still sleeping in the morning and sat down with the script in her hand. “Please teach me what to do,” she requested, and I lay there amazed. The other actors, too, were all in their prime. Their spirit and enthusiasm was obvious in their work, and equally manifest in their eating and drinking habits.
They invented a dish called Sanzoku-yaki, or “Mountain Bandit Broil,” and ate it frequently. It consisted of beef strips sautéed in oil and then dipped in a sauce made of curry powder in melted butter. But while they held their chopsticks in one hand, in the other they’d hold a raw onion. From time to time they’d put a strip of meat on the onion and take a bite out of it. Thoroughly barbaric.
The shooting began at the Nara virgin forest. This forest was infested with mountain leeches. They dropped out of the trees onto us, they crawled up our legs from the ground to suck our blood. Even when they had had their fill, it was no easy task to pull them off, and once you managed to rip a glutted leech out of your flesh, the open sore seemed never to stop bleeding. Our solution was to put a tub of salt in the entry of the inn. Before we left for the location in the morning we would cover our necks, arms and socks with salt. Leeches are like slugs—they avoid salt.
In those days the virgin forest around Nara harbored great numbers of massive cryptomerias and Japanese cypresses, and vines of lush ivy twined from tree to tree like pythons. It had the air of the deepest mountains and hidden glens. Every day I walked in this forest, partly to scout for shooting locations and partly for pleasure. Once a black shadow suddenly darted in front of me: a deer from the Nara park that had returned to the wild. Looking up, I saw a pack of monkeys in the big trees about my head.
The inn we were housed in lay at the foot of Mount Wakakusa. Once a big monkey who seemed to be the leader of the pack came and sat on the roof of the inn to stare at us studiously throughout our boisterous evening meal. Another time the moon rose from behind Mount Wakakusa, and for an instant we saw the silhouette of a deer framed distinctly against its full brightness. Often after supper we climbed up Mount Wakakusa and formed a circle to dance in the moonlight. I was still young and the cast members were even younger and bursting with energy. We carried out our work with enthusiasm.
When the location moved from the Nara Mountains to the Komyoji temple forest in Kyoto, it was Gion Festival time. The sultry summer sun hit with full force, but even though some members of my crew succumbed to heat stroke, our work pace never flagged. Every afternoon we pushed through without even stopping for a single swallow of water. When work was over, on the way back to the inn we stopped at a beer hall in Kyoto’s downtown Shijo-Kawaramachi district. There each of us downed about four of the biggest mugs of draft beer they had. But we ate dinner without any alcohol and, upon finishing, split up to go about our private affairs. Then at ten o’clock we’d gather again and pour whiskey down our throats with a vengeance. Every morning we were up bright and clear-headed to do our sweat-drenched work.
Where the Komyoji temple forest was too thick to give us the light we needed for shooting, we cut down trees without a moment’s hesitation or explanation. The abbot of Komyoji glared fearfully as he watched us. But as the days went on, he began to take the initiative, showing us where he thought trees should be felled.
When our shoot was finished at the Komyoji location, I went to pay my respects to the abbot. He looked at me with grave seriousness and spoke with deep feeling. “To be honest with you, at the outset we were very disturbed when you went about cutting down the temple trees as if they belonged to you. But in the end we were won over by your wholehearted enthusiasm. ‘Show the audience something good.’ This was the focus of all your energies, and you forgot yourselves. Until I had the chance to watch you, I had no idea that the making of a movie was a crystallization of such effort. I was very deeply impressed.”
The abbot finished and set a folding fan before me. In commemoration of our filming, he had written on the fan three characters forming a Chinese poem: “Benefit All Mankind.” I was left speechless.
We set up a parallel schedule for the use of the Komyoji location and open set of the Rashomon gate. On sunny days we filmed at Komyoji; on cloudy days we filmed the rain scenes at the gate set. Because the gate set was so huge, the job of creating rainfall on it was a major operation. We borrowed fire engines and turned on the studio’s fire hoses to full capacity. But when the camera was aimed upward at the cloudy sky over the gate, the sprinkle of the rain couldn’t be seen against it, so we made rainfall with black ink in it. Every day we worked in temperatures of more than 85º Fahrenheit, but when the wind blew through the wide-open gate with the terrific rainfall pouring down over it, it was enough to chill the skin.
I had to be sure that this huge gate looked huge to the camera. And I had to figure out how to use the sun itself. This was a major concern because of the decision to use the light and shadows of the forest as the keynote of the whole film. I determined to solve the problem by actually filming the sun. These days it is not uncommon to point the camera directly at the sun, but at the time Rashomon was being made it was still one of the taboos of cinematography. It was even thought that the sun’s rays shining directly into your lens would burn the film in your camera. But my cameraman, Miyagawa Kazuo, boldly defied this convention and created superb images. The introductory section in particular, which leads the viewer through the light and shadow of the forest into a world where the human heart loses its way, was truly magnificent camera work. I feel that this scene, later praised at the Venice International Film Festival as the first instance of a camera entering the heart of a forest, was not only one of Miyagawa’s masterpieces but a world-class masterpiece of black-and-white cinematography.
And yet, I don’t know what happened to me. Delighted as I was with Miyagawa’s work, it seems I forgot to tell him. When I said to myself, “Wonderful,” I guess I thought I had said “Wonderful” to him at the same time. I didn’t realize I hadn’t until one day Miyagawa’s old friend Shimura Takashi (who was playing the woodcutter in Rashomon) came to me and said, “Miyagawa’s very concerned about whether his camera work is satisfactory to you.” Recognizing my oversight for the first time, I hurriedly shouted “One hundred percent! One hundred for camera work! One hundred plus!”
There is no end to my recollections of Rashomon. If I tried to write about all of them, I’d never finish, so I’d like to end with one incident that left an indelible impression on me. It has to do with the music.
As I was writing the script, I heard the rhythms of a bolero in my head over the episode of the woman’s side of the story. I asked Hayasaka to write a bolero kind of music for the scene. When we came to the dubbing of that scene, Hayasaka sat down next to me and said, “I’ll try it with the music.” In his face I saw uneasiness and anticipation. My own nervousness and expectancy gave me a painful sensation in my chest. The screen lit up with the beginning of the scene, and the strains of the bolero music softly counted out the rhythm. As the scene progressed, the music rose, but the image and the sound failed to coincide and seemed to be at odds with each other. “Damn it,” I thought. The multiplication of sound and image that I had calculated in my head had failed, it seemed. It was enough to make me break out in a cold sweat.
We kept going. The bolero music rose yet again, and suddenly picture and sound fell into perfect unison. The mood created was positively eerie. I felt an icy chill run down my spine, and unwittingly I turned to Hayasaka. He was looking at me. His face was pale, and I saw that he was shuddering with the same eerie emotion I felt. From that point on, sound and image proceeded with incredible speed to surpass even the calculations I had made in my head. The effect was strange and overwhelming.
And that is how Rashomon was made. During the shooting there were two fires at the Daiei studios. But because we had mobilized the fire engines for our filming, they were already primed and drilled, so the studios escaped with very minor damage.
After Rashomon I made a film of Dostoevsky’s The Idiot (Hakuchi, 1951) for the Shochiku studios. This Idiot was ruinous. I clashed directly with the studio heads, and then when the reviews on the completed film came out, it was as if they were a mirror reflection of the studio’s attitude toward me. Without exception, they were scathing. On the heels of this disaster, Daiei rescinded its offer for me to do another film with them.
I listened to this cold announcement at the Chofu studios of Daiei in the Tokyo suburbs. I walked out through the gate in the gloomy daze, and, not having the will even to get on the train, I ruminated over my bleak situation as I walked all the way home to Komae. I concluded that for some time I would have to “eat cold rice” and resigned myself to this fact. Deciding that it would serve no purpose to get excited about it, I set out to go fishing at the Tamagawa River. I cast my line into the river. It immediately caught on something and snapped in two. Having no replacement with me, I hurriedly put my equipment away. Thinking this was what it was like when bad luck catches up with you, I headed back home.
I arrived home depressed, with barely enough strength to slide open the door to the entry. Suddenly my wife came bounding out. “Congratulations!” I was unwittingly indignant: “For what?” “Rashomon has the Grand Prix.” Rashomon had won the Grand Prix at the Venice International Film Festival, and I was spared from having to eat cold rice.
Once again an angel had appeared out of nowhere. I did not even know that Rashomon had been submitted to the Venice Film Festival. The Japan representative to Italiafilm, Giuliana Stramigioli, had seen it and recommended it to Venice. It was like pouring water into the sleeping ears of the Japanese film industry.
Later Rashomon won the American Academy Award for Best Foreign Language Film. Japanese critics insisted that these two prizes were simply reflections of Westerners’ curiosity and taste for Oriental exoticism, which struck me then, and now, as terrible. Why is it that Japanese people have no confidence in the worth of Japan? Why do they elevate everything foreign and denigrate everything Japanese? Even the woodblock prints of Utamaro, Hokusai and Sharaku were not appreciated by Japanese until they were first discovered by the West. I don’t know how to explain this lack of discernment. I can only despair of the character of my own people.
Excerpted from Something Like an Autobiography, trans., Audie E. Bock. Translation Copyright ©1982 by Vintage Books. Reprinted by permission of Vintage Books, a division of Random House.
|
// Copyright 2017 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "src/ui/lib/escher/scene/camera.h"
#include "src/lib/fxl/logging.h"
#include "src/ui/lib/escher/math/rotations.h"
#include "src/ui/lib/escher/scene/viewing_volume.h"
#include "src/ui/lib/escher/util/debug_print.h"
namespace escher {
static std::pair<float, float> ComputeNearAndFarPlanes(
const ViewingVolume& volume, const mat4& camera_transform) {
float width = volume.width();
float height = volume.height();
float bottom = volume.bottom();
float top = volume.top();
FXL_DCHECK(bottom > top);
vec3 corners[] = {{0, 0, bottom}, {width, 0, bottom},
{0, 0, top}, {width, 0, top},
{0, height, bottom}, {width, height, bottom},
{0, height, top}, {width, height, top}};
// Transform the corners into eye space, throwing away everything except the
// negated Z-coordinate. There are two reasons that we do this; both rely on
// the fact that in Vulkan eye space, the view vector is the negative Z-axis:
// - Z is constant for all planes perpendicular to the view vector, so we
// can use these to obtain the near/far plane distances.
// - A positive Z value is behind the camera, so a negative Z-value must be
// negated to obtain the distance in front of the camera.
//
// The reason for computing these negated Z-coordinates is that the smallest
// one can be directly used as the near plane distance, and the largest for
// the far plane distance.
float negated_z;
float far = -FLT_MAX;  // note: FLT_MIN is the smallest *positive* float, not the most negative
float near = FLT_MAX;
for (int i = 0; i < 8; ++i) {
negated_z = -(camera_transform * vec4(corners[i], 1)).z;
near = negated_z < near ? negated_z : near;
far = negated_z > far ? negated_z : far;
}
#ifndef NDEBUG
// The viewing volume must be entirely in front of the camera.
// We can relax this restriction later, but we'll need to develop some
// heuristics.
if (near < 0) {
// Invert the camera matrix to obtain the camera space to world space
// transform from which we can extract the camera position in world space.
mat4 camera_inverse = glm::inverse(camera_transform);
vec3 pos(camera_inverse * vec4(0, 0, 0, 1));
vec3 dir(camera_inverse * vec4(0, 0, -1, 0));
FXL_LOG(FATAL) << "ViewingVolume must be entirely in front of the "
"camera\nCamera Position: "
<< pos << "\nCamera Direction: " << dir << "\n"
<< volume;
}
#endif
return std::make_pair(near, far);
}
Camera::Camera(const mat4& transform, const mat4& projection)
: transform_(transform), projection_(projection) {}
Camera Camera::NewOrtho(const ViewingVolume& volume) {
// The floor of the stage has (x, y) coordinates ranging from (0,0) to
// (volume.width(), volume.height()); move the camera so that it is above the
// center of the stage. Also, move the camera "upward"; since the Vulkan
// camera points into the screen along the negative-Z axis, this is equivalent
// to moving the entire stage by a negative amount in Z.
mat4 transform = glm::translate(
vec3(-volume.width() / 2, -volume.height() / 2, volume.top() - 10.f));
// This method does not take the transform of the camera as input so there is
// no way to reorient the view matrix outside of this method, so we point it
// down the -Z axis here. The reason we mirror here instead of rotating is
// because glm::orthoRH() produces a "right handed" matrix only in the sense
// that it projects a right handed view space into OpenGL's left handed NDC
// space, and thus it also projects a left handed view space into Vulkan's
// right handed NDC space.
transform = glm::scale(transform, glm::vec3(1.f, 1.f, -1.f));
auto near_and_far = ComputeNearAndFarPlanes(volume, transform);
mat4 projection = glm::orthoRH(
-0.5f * volume.width(), 0.5f * volume.width(), -0.5f * volume.height(),
0.5f * volume.height(), near_and_far.first, near_and_far.second);
return Camera(transform, projection);
}
Camera Camera::NewForDirectionalShadowMap(const ViewingVolume& volume,
const glm::vec3& direction) {
glm::mat4 transform;
RotationBetweenVectors(direction, glm::vec3(0.f, 0.f, -1.f), &transform);
BoundingBox box = transform * volume.bounding_box();
constexpr float kStageFloorFudgeFactor = 0.0001f;
const float range = box.max().z - box.min().z;
const float near = -box.max().z - (kStageFloorFudgeFactor * range);
const float far = -box.min().z + (kStageFloorFudgeFactor * range);
glm::mat4 projection =
glm::ortho(box.min().x, box.max().x, box.min().y, box.max().y, near, far);
return Camera(transform, projection);
}
Camera Camera::NewPerspective(const ViewingVolume& volume,
const mat4& transform, float fovy) {
auto near_and_far = ComputeNearAndFarPlanes(volume, transform);
float aspect = volume.width() / volume.height();
mat4 projection =
glm::perspectiveRH(fovy, aspect, near_and_far.first, near_and_far.second);
// glm::perspectiveRH() generates "right handed" projection matrices but
// since glm is intended to work with OpenGL, glm::perspectiveRH() generates
// a matrix that projects a right handed space into OpenGL's left handed NDC
// space. In order to make it project a right handed space into Vulkan's
// right handed NDC space we must flip it again. Note that this is equivalent
// to calling glm::perspectiveLH with the same arguments and rotating the
// resulting matrix 180 degrees around the X axis.
projection = glm::scale(projection, glm::vec3(1.f, -1.f, 1.f));
return Camera(transform, projection);
}
vk::Rect2D Camera::Viewport::vk_rect_2d(uint32_t fb_width,
uint32_t fb_height) const {
vk::Rect2D result;
result.offset.x = x * fb_width;
result.offset.y = y * fb_height;
result.extent.width = width * fb_width;
result.extent.height = height * fb_height;
return result;
}
} // namespace escher
|
St. Francis, Animals and the Environment
Dr. Marcellino D'Ambrosio
You often see a garden statue of him with a bird on his shoulder. Yes, St. Francis of Assisi did have a special relationship with animals. He preached to the birds, pacified a wolf, and put together an animal cast for what is regarded as the very first live nativity scene.
But he had no interest in “the environment.” No feeling for it whatsoever.
Instead, he was in love with creation. And that’s because he was in love with the Creator, whom he regarded not as some cosmic force or distant, detached monarch, but as “Father.” He so much loved God his Father that he had great affection for anything related to God – the sacraments, the Church, its very imperfect ministers, broken-down country chapels, and all of God’s marvelous works of art – human beings first and foremost, but also the animals and even the inanimate objects that adorn the heavens and the earth.
The fondness for and kinship St. Francis felt with “brother sun and sister moon” was truly a gift. But it is a gift that we all receive when we receive the Holy Spirit, since it is one of the seven gifts mentioned in Isaiah 11:2-3. At least this is how St. Thomas Aquinas and many after him explained this beautiful, supernatural gift of piety. The natural virtue of piety was extolled by the Greeks and Romans – a love of those who gave you life, first and foremost your parents and, after them, your fatherland. This entailed also a respect and affection for all that is connected with your parents and dear to them as well – your grandparents, uncles and aunts, and, in the case of your country, its flag, its national anthem, its history and its heroes. The term for piety towards one’s country is “patriotism,” which actually has at its root the term “pater,” or “father.”
St. Francis loved his home town of Assisi. But his deeper patriotism was for the Kingdom of God. His affection for the Kingdom included respect and reverence for all the King’s creatures and subjects, whether they be great or small.
Now, this does not mean that Francis saw all creatures as his equals, as some animal rights advocates today seem to do. One animal rights philosopher, Peter Singer, goes so far as to teach that adult whales and chimpanzees are actually superior to human fetuses and infants in both dignity and value. He would save the whales but allow both abortion and infanticide.
St. Francis would be appalled at such a concept. Biblical person that he was, he understood that woman and man are God’s supreme masterpieces, made in His image and likeness, unlike the animals. Human beings are given dominion over the rest of creation in Genesis 2 not to exploit, however, but to cultivate, care for, and perfect. God entrusts Adam and Eve not with “the environment,” but with “the Garden” – a place of beauty in which we are made to walk with God.
So St. Francis loves the birds, but also presses them into the service of the gospel. He saves the wolf of Gubbio from the wrath of angry townspeople, but rebukes it for its ferocity and calls men and wolf to live ever after in harmony. And the animals of the nativity scene? They are companions of the infant who is the Word made flesh.
So the authentic biblical and Catholic approach to “the environment” is not to see it coldly and scientifically as “the environment.” But rather, in the fashion of St. Francis, to approach it as the expression of the Father’s beauty, as the gift of the Father’s love, as an icon, a window to the new creation. Reckless exploitation would never fit with such a vision. But neither would some secular environmentalism.
Dr. Marcellino D’Ambrosio writes from Texas. For more information on his resources and his pilgrimages to Italy and the Holy Land, visit www.crossroadsinitiative.com or call 1.800.803.0118.
Personal Prayer: Pathway to Joy
Marcellino D'Ambrosio, Ph.D.
Everyone knows that personal prayer is important. You can't expect to deepen a relationship with God by talking with Him only once a week! But how, in the midst of the busy, noisy life we all lead, can we develop a pattern of daily prayer that really works? And if we are successful in carving out some moments for prayer, what do we do? How should we spend that time in a way that would be most fruitful?
Dr. Marcellino D'Ambrosio has taught spiritual theology academically, but, more importantly, he's had plenty of practice applying that tradition to everyday life. With a family of seven, a business, and a non-profit corporation to run, he knows the challenges that a busy, active life can pose to the Christian who wants to pray. In this talk, he lays down principles and gives practical suggestions on how busy laypeople can develop a prayer life that leads to joy and personal transformation.
CD - $8.95
The Seven Deadly Sins - 3 CD Set
What are the Seven Deadly Sins? There are books written about them and movies made about them, but what are they?
From about the fifth century, Christian spiritual writers identified seven patterns of sin that, if not broken, would lead to spiritual death. In this fascinating series by Dr. Marcellino D'Ambrosio, we learn the destructive, addictive dynamics of these seven vices and how they infiltrate, and ultimately take over people's lives. Most importantly, we find out how to get free of the chains forged by these sins and the necessary qualities to cultivate to make us immune to them in the future.
Beyond the Birds and the Bees
"The Talk." It's one of the most daunting prospects parents face. Communicating the richness of Catholic teaching on sexuality in a faithful and effective way can be an overwhelming responsibility. But does it have to be so?
In this thoroughly revised version of Beyond the Birds and the Bees, Greg and Lisa Popcak empower you with the tools needed to move well beyond "the Talk" by offering a comprehensive guide to raising sexually whole and holy children. Using the riches of Blessed John Paul II's Theology of the Body, the Popcaks help you safely navigate your children from infancy through the teenage years and beyond.
Building Our House on Rock: The Sermon On The Mount
Jesus’ Sermon on the Mount ends with the parable of the builders on rock or sand. Doing what Jesus asks results in building a life that endures; not doing it results in disaster. The choice is ours, and it’s a scary one. How can we read these words so that we can know what Jesus meant and do it?
|
#include <PancakeTestData.h>
#include <gtest/gtest.h>
#include <pacbio/pancake/AlignerBatchCPU.h>
#include <pacbio/pancake/AlignerFactory.h>
#include <pacbio/pancake/AlignmentParameters.h>
#include <iostream>
#include "TestHelperUtils.h"
TEST(AlignerBatchCPU, ArrayOfTests_Small)
{
using namespace PacBio::Pancake;
struct TestData
{
std::string testName;
// Tuple: <query, target, isGlobal>
std::vector<std::tuple<std::string, std::string, bool>> batchData;
int32_t numThreads;
std::vector<PacBio::Pancake::AlignmentResult> expectedAlns;
};
// clang-format off
std::vector<TestData> testData = {
{
"Batch of multiple query/target pairs.",
{
// Global alignment.
{"ACTGACTGAC", "ACTGTCTGAC", true},
{"ACTG", "ACTG", true},
{"A", "T", true},
// Extension alignment.
{"AAAAAAAAAAAAAAAAAAAACCCCCACCCCCCCCCCCCCCCCCCCCCCCCC", "AAAAAAAAAAAAAAAAAAAAGGGGGAGGGGGGGGGGGGGGGGGGGGGGGGG", false},
},
4,
// Expected results.
{
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("4=1X5="), 10, 10, 10, 10, true, 14, 14, false, Alignment::DiffCounts(9, 1, 0, 0)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("4="), 4, 4, 4, 4, true, 8, 8, false, Alignment::DiffCounts(4, 0, 0, 0)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("1X"), 1, 1, 1, 1, true, -4, -4, false, Alignment::DiffCounts(0, 1, 0, 0)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("20="), 20, 20, 19, 19, true, 40, 40, true, Alignment::DiffCounts(20, 0, 0, 0)},
},
},
{
"Empty batch.",
{
},
4,
// Expected results.
{
},
},
{
"Another batch of edge cases.",
{
// Global alignment.
{"", "", true},
{"A", "", true},
{"", "A", true},
{"A", "T", true},
},
4,
// Expected results.
{
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar(""), 0, 0, 0, 0, false, 0, 0, false, Alignment::DiffCounts(0, 0, 0, 0)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("1I"), 1, 0, 1, 0, true, -4, -4, false, Alignment::DiffCounts(0, 0, 1, 0)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("1D"), 0, 1, 0, 1, true, -4, -4, false, Alignment::DiffCounts(0, 0, 0, 1)},
PacBio::Pancake::AlignmentResult{PacBio::BAM::Cigar("1X"), 1, 1, 1, 1, true, -4, -4, false, Alignment::DiffCounts(0, 1, 0, 0)},
},
},
};
// clang-format on
// Alignment parameters.
PacBio::Pancake::AlignerType alignerTypeGlobal = AlignerType::EDLIB;
PacBio::Pancake::AlignmentParameters alnParamsGlobal;
PacBio::Pancake::AlignerType alignerTypeExt = AlignerType::KSW2;
PacBio::Pancake::AlignmentParameters alnParamsExt;
// These parameters are the same as current defaults, but they are specified here in case
// the defaults ever change, so that the test doesn't have to be updated.
alnParamsGlobal.zdrop = 100;
alnParamsGlobal.zdrop2 = 500;
alnParamsGlobal.alignBandwidth = 500;
alnParamsGlobal.endBonus = 50;
alnParamsGlobal.matchScore = 2;
alnParamsGlobal.mismatchPenalty = 4;
alnParamsGlobal.gapOpen1 = 4;
alnParamsGlobal.gapExtend1 = 2;
alnParamsGlobal.gapOpen2 = 24;
alnParamsGlobal.gapExtend2 = 1;
alnParamsExt = alnParamsGlobal;
for (const auto& data : testData) {
// Debug info.
SCOPED_TRACE(data.testName);
std::cerr << "testName = " << data.testName << "\n";
PacBio::Pancake::AlignerBatchCPU aligner(data.numThreads, alignerTypeGlobal,
alnParamsGlobal, alignerTypeExt, alnParamsExt);
for (const auto& seqPair : data.batchData) {
const auto& query = std::get<0>(seqPair);
const auto& target = std::get<1>(seqPair);
const bool isGlobal = std::get<2>(seqPair);
aligner.AddSequencePair(query.c_str(), query.size(), target.c_str(), target.size(),
isGlobal);
}
// Run alignment.
aligner.AlignAll();
const std::vector<PacBio::Pancake::AlignmentResult>& results = aligner.GetAlnResults();
// std::cerr << "results.size() = " << results.size() << "\n";
// for (size_t i = 0; i < results.size(); ++i) {
// const auto& aln = results[i];
// std::cerr << "[result " << i << "] " << aln << "\n";
// }
// Evaluate.
ASSERT_EQ(data.expectedAlns, results);
}
}
|
CTComms sends on average 2 million emails monthly on behalf of over 125 different charities and not-for-profits.
Take the complexity of technology and stir in the complexity of the legal system and what do you get? Software licenses! If you've ever attempted to read one you know how true this is, but you have to know a little about software licensing even if you can't parse all of the fine print.
By: Chris Peters
March 10, 2009
A software license is an agreement between you and the owner of a program which lets you perform certain activities which would otherwise constitute an infringement under copyright law. The software license usually answers questions such as:
The price of the software and the licensing fees, if any, are sometimes discussed in the licensing agreement, but usually they're described elsewhere.
If you read the definitions below and you're still scratching your head, check out Categories of Free and Non-Free Software which includes a helpful diagram.
Free vs Proprietary:
When you hear the phrase "free software" or "free software license," "free" is referring to your rights and permissions ("free as in freedom" or "free as in free speech"). In other words, a free software license gives you more rights than a proprietary license. You can usually copy, modify, and redistribute free software without paying a fee or obtaining permission from the developers and distributors. In most cases "free software" won't cost you anything, but that's not always the case – in this instance the word free is making no assertion whatsoever about the price of the software. Proprietary software puts more restrictions and limits on your legal permission to copy, modify, and distribute the program.
Free, Open-Source or FOSS?
In everyday conversation, there's not much difference between "free software," "open source software," and "FOSS (Free and Open-Source Software)." In other words, you'll hear these terms used interchangeably, and the proponents of free software and the supporters of open-source software agree with one another on most issues. However, the official definition of free software differs somewhat from the official definition of open-source software, and the philosophies underlying those definitions differ as well. For a short description of the difference, read Live and Let License. For a longer discussion from the "free software" side, read Why Open Source Misses the Point of Free Software. For the "open-source" perspective, read Why Free Software is Too Ambiguous.
Public domain and copyleft.
These terms refer to different categories of free, unrestricted licensing. A copyleft license allows you all the freedoms of a free software license, but adds one restriction. Under a copyleft license, you have to release any modifications under the same terms as the original software. In effect, this blocks companies and developers who want to alter free software and then make their altered version proprietary. In practice, almost all free and open-source software is also copylefted. However, technically you can release "free software" that isn't copylefted. For example, if you developed software and released it under a "public domain" license, it would qualify as free software, but it isn't copyleft. In effect, when you release something into the public domain, you give up all copyrights and rights of ownership.
Shareware and freeware.
These terms don't really refer to licensing, and they're confusing in light of the discussion of free software above. Freeware refers to software (usually small utilities at sites such as Tucows.com) that you can download and install without paying. However, you don't have the right to view the source code, and you may not have the right to copy and redistribute the software. In other words, freeware is proprietary software. Shareware is even more restrictive. In effect, shareware is trial software. You can use it for a limited amount of time (usually 30 or 60 days) and then you're expected to pay to continue using it.
End User Licensing Agreement (EULA).
When you acquire software yourself, directly from a vendor or retailer, or directly from the vendor's Web site, you usually have to indicate by clicking a box that you accept the licensing terms. This "click-through" agreement that no one ever reads is commonly known as a EULA. If you negotiate a large purchase of software with a company, and you sign a contract to seal the agreement, that contract usually replaces or supersedes the EULA.
Most major vendors of proprietary software offer some type of bulk purchasing and volume licensing mechanism. The terms vary widely, but if you order enough software to qualify, the benefits in terms of cost and convenience are significant. Also, not-for-profits sometimes qualify for volume licensing with very small initial purchases.
Some of the benefits of volume licensing include:
Lower cost. As with most products, software costs less when you buy more of it.
Ease of installation. Without volume licenses, you usually have to enter a separate activation code (also known as a product key or license key) for each installed copy of the program. On the other hand, volume licenses provide you with a single, organisation-wide activation code, which makes it much easier to find when you need to reinstall the software.
Easier tracking of licenses. Keeping track of how many licenses you own, and how many copies you've actually installed, is a tedious, difficult task. Many volume licensing programs provide an online account which is automatically updated when you obtain or activate a copy of that company's software. These accounts can also coordinate licensing across multiple offices within your organisation.
To learn more about volume licensing from a particular vendor, check out some of the resources below:
Qualified not-for-profits and libraries can receive donated volume licenses for Microsoft products through TechSoup. For more information, check out our introduction to the Microsoft Software Donation Program, and the Microsoft Software Donation Program FAQ. For general information about the volume licensing of Microsoft software, see Volume Licensing Overview.
If you get Microsoft software from TechSoup or other software distributors who work with not-for-profits, you may need to go to the eOpen Web site to locate your Volume license keys. For more information, check out the TechSoup Donation Recipient's Guide to the Microsoft eOpen Web Site.
Always check TechSoup Stock first to see if there's a volume licensing donation program for the software you're interested in. If TechSoup doesn't offer that product or if you need more copies than you can find at TechSoup, search for "volume licensing not-for-profits software" or just "not-for-profits software." For example, when we have an inventory of Adobe products, qualifying and eligible not-for-profits can obtain four individual products or one copy of Creative Suite 4 through TechSoup. If we're out of stock, or you've used up your annual Adobe donation, you can also check TechSoup's special Adobe donation program and also Adobe Solutions for Nonprofits for other discounts available to not-for-profits. For more software-hunting tips, see A Quick Guide to Discounted Software Programs.
Pay close attention to the options and licensing requirements when you acquire server-based software. You might need two different types of license – one for the server software itself, and a set of licenses for all the "clients" accessing the software. Depending on the vendor and the licensing scenario, "client" can refer either to the end users themselves (for example, employees, contractors, clients, and anyone else who uses the software in question) or their computing devices (for example, laptops, desktop computers, smartphones, PDAs, etc.). We'll focus on Microsoft server products, but similar issues can arise with other server applications.
Over the years, Microsoft has released hundreds of server-based applications, and the licensing terms are slightly different for each one. Fortunately, there are common license types and licensing structures across different products. In other words, while a User CAL (Client Access License) for Windows Server is distinct from a User CAL for SharePoint Server, the underlying terms and rights are very similar. The TechSoup product pages for Microsoft software do a good job of describing the differences between products, so we'll focus on the common threads in this article.
Moreover, Microsoft often lets you license a single server application in more than one way, depending on the needs of your organisation. This allows you the flexibility to choose the licenses that best reflect your organisation's usage patterns and thereby cost you the least amount of money. For example, for Windows Server and other products you can acquire licenses on a per-user basis (for example, User CALs) or per-device basis (for example, Device CALs).
The license required to install and run most server applications usually comes bundled with the software itself. So you can install and run most applications "out of the box," as long as you have the right number of client licenses (see the section below for more on that). However, when you're running certain server products on a computer with multiple processors, you may need to get additional licenses. For example, if you run Windows Server 2008 DataCenter edition on a server with two processors, you need a separate license for each processor. SQL Server 2008 works the same way. This type of license is referred to as a processor license. Generally you don't need client licenses for any application that's licensed this way.
Client Licenses for Internal Users
Many Microsoft products, including Windows Server 2003 and Windows Server 2008, require client access licenses for all authenticated internal users (for example, employees, contractors, volunteers, etc.). On the other hand, SQL Server 2008 and other products don't require any client licenses. Read the product description at CTXchange if you're looking for the details about licensing a particular application.
User CALs: User CALs allow each user access to all the instances of a particular server product in an organisation, no matter which device they use to gain access. In other words, if you run five copies of Windows Server 2008 on five separate servers, you only need one User CAL for each person in your organisation who accesses those servers (or any software installed on those servers), whether they access a single server, all five servers, or some number in between. Each user with a single CAL assigned to them can access the server software from as many devices as they want (for example, desktop computers, laptops, smartphones, etc.). User CALs are a popular licensing option.
Device CALs: Device CALs allow access to all instances of a particular server application from a single device (for example, a desktop computer, a laptop, etc.) in your organisation. Device CALs only make sense when multiple employees use the same computer. For example, in 24-hour call centres different employees on different shifts often use the same machine, so Device CALs make sense in this situation.
Choosing a licensing mode for your Windows Server CALs: With Windows Server 2003 and Windows Server 2008, you use a CAL (either a User CAL or a Device CAL) in one of two licensing modes: per seat or per server. You make this decision when you're installing your Windows Server products, not when you acquire the CALs. The CALs themselves don't have any mode designation, so you can use either a User CAL or a Device CAL in either mode. Per seat mode is the default mode, and the one used most frequently. The description of User CALs and Device CALs above describes the typical per seat mode. In "per server" mode, Windows treats each license as a "simultaneous connection." In other words, if you have 40 CALs, Windows will let 40 authenticated users have access. The 41st user will be denied access. However, in per server mode, each CAL is tied to a particular instance of Windows Server, and you have to acquire a new set of licenses for each new server you build that runs Windows. Therefore, per server mode works for some small organisations with one or two servers and limited access requirements.
You don't "install" client licenses the way you install software. There are ways to automate the tracking of software licenses indirectly, but the server software can't refuse access to a user or device on licensing grounds. The licenses don't leave any "digital footprint" that the server software can read. An exception to this occurs when you license Windows Server in per server mode. In this case, if you have 50 licenses, the 51st authenticated user will be denied access (though anonymous users can still access services).
Some key points to remember about client licensing:
The licensing scenarios described in this section arise less frequently, and are too complex to cover completely in this article, so they're described briefly below along with more comprehensive resources.
You don't need client licenses for anonymous, unauthenticated external users. In other words, if someone accesses your Web site, and that site runs on Internet Information Server (IIS), Microsoft's Web serving software, you don't need a client license for any of those anonymous users.
If you have any authenticated external users who access services on your Windows-based servers, you can obtain CALs to cover their licensing requirements. However, the External Connector License (ECL) is a second option in this scenario. The ECL covers all use by authenticated external users, but it's a lot more expensive than a CAL, so only get one if you'll have a lot of external users. For example, even if you get your licenses through the CTXchange donation program, an ECL for Windows Server 2008 has a £76 administrative fee, while a User CAL for Windows Server 2008 carries a £1 admin fee. If only a handful of external users access your Windows servers, you're better off acquiring User CALs. Also, an ECL only applies to external users and devices. In other words, if you have an ECL, you still have to get a CAL for all employees and contractors.
Even though Terminal Services (TS) is built into Windows Server 2003 and 2008, you need to get a separate TS CAL for each client (i.e. each user or each device) that will access Terminal Services in your organisation. This TS license is in addition to your Windows Server CALs.
Microsoft's System Center products (a line of enterprise-level administrative software packages) use a special type of license known as a management license (ML). Applications that use this type of licensing include System Center Configuration Manager 2007 and System Center Operations Manager 2007. Any desktop or workstation managed by one of these applications needs a client management license. Any server managed by one of these applications requires a server management license, and there are two types of server management licenses – standard and enterprise. You need one or the other but not both. There are also special licensing requirements if you're managing virtual instances of Windows operating systems. For more information, see TechSoup's Guide to System Center Products and Licensing and Microsoft's white paper on System Center licensing.
Some Microsoft server products have two client licensing modes, standard and enterprise. As you might imagine, an Enterprise CAL grants access to more advanced features of a product. Furthermore, with some products, such as Microsoft Exchange, the licenses are additive. In other words, a user needs both a Standard CAL AND an Enterprise CAL in order to access the advanced features. See Exchange Server 2007 Editions and Client Access Licenses for more information.
With virtualisation technologies, multiple operating systems can run simultaneously on a single physical server. Every time you install a Microsoft application, whether on a physical hardware system or a virtual hardware system, you create an "instance" of that application. The number of "instances" of a particular application that you can run using a single license varies from product to product. For more information see the Volume Licensing Briefs, Microsoft Licensing for Virtualization and the Windows Server Virtualization Calculator. For TechSoup Stock products, see the product description for more information.
There are a lot of nuances to Microsoft licensing, and also a lot of excellent resources to help you understand different scenarios.
About the Author:
Chris is a former technology writer and technology analyst for TechSoup for Libraries, which aims to provide IT management guidance to libraries. His previous experience includes working at Washington State Library as a technology consultant and technology trainer, and at the Bill and Melinda Gates Foundation as a technology trainer and tech support analyst. He received his M.L.S. from the University of Michigan in 1997.
Originally posted here.
Copyright © 2009 CompuMentor. This work is published under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License.
The latest version of Microsoft Office Professional Plus is an integrated collection of programs, servers, and services designed to work together to enable optimised information work.
|
#ifndef READLINE_UTILS_H
#define READLINE_UTILS_H
#include <memory>
#include <vector>
#include <string>
#include "text.h"
#include "complete.h"
namespace readline {
std::vector<std::string> SplitArgs(const std::string& line, int line_pos);
bool CheckNewArg(const std::string& line, int line_pos);
enum class ListDirType {
FILES,
DIR,
FILES_DIR
};
std::vector<std::string> ListDir(const std::string& dir, ListDirType t);
void MatchArg(const std::string& arg, List* list);
std::vector<std::string> MatchArg(const std::string& arg,
std::vector<std::string>& list);
std::vector<ItemDescr> MatchArg(const std::string& arg,
std::vector<ItemDescr>& list);
std::unique_ptr<List> MatchDirList(const std::vector<std::string>& args);
// Splits a path into its last component and the rest. For example, for
// /home/alex/test, "test" is the last part and /home/alex/ is the rest.
// When supress_point is true, a last part that would otherwise be
// returned as "." (the current directory) is suppressed.
std::tuple<std::string, std::string> ParserPath(const std::string& arg,
bool supress_point = true);
std::string DirectoryFormat(const std::string& dir);
bool IsDirectory(const std::string path);
std::tuple<std::unique_ptr<List>, RetType, bool> RetDirFileList(
const std::vector<std::string>& params, bool tip, ListDirType type);
std::tuple<std::unique_ptr<List>, RetType, bool> RetList(
std::vector<std::string>&& plist, const std::vector<std::string>& params,
bool tip);
std::tuple<std::unique_ptr<List>, RetType, bool> RetList(
std::vector<ItemDescr>&& plist_descr,
const std::vector<std::string>& params, bool tip);
std::wstring str2wstr(const std::string& str);
std::string wstr2str(const std::wstring& wstr);
}
#endif // READLINE_UTILS_H
|
International security authorities spent close to two years pursuing a criminal site called Darkode, where hackers could buy and sell malware meant to steal information. On the international site, which could only be accessed with a referral and a password, hackers advertised and sold their homemade software. Criminals who bought it could steal anything from Facebook follower lists to database account passwords.
The sophistication of Darkode shows just how organized hacking has become. The eventual government takedown didn’t stop the site altogether, either. Darkode was resurrected with improved security, showing that although many people were arrested in the sting, several key players were able to escape prosecution and get back to business.
Law firms are especially tempting to cyber criminals because of the value of the sensitive information stored on their networks. A majority of law firms have experienced some sort of hacking, with law firms that handle government contracts and international business being targeted most often. About 80% of the largest 100 law firms have experienced some sort of breach. The sensitive information on lawyers’ computers can be invaluable to foreign governments, stakeholders and investors, and perhaps most worrisome, criminals.
As quickly as we build new technology to keep criminals out, hackers are working around the clock, and using sophisticated tools like Darkode to penetrate your security.
Why Are Law Firms so Susceptible to Hackers?
Law firms are hesitant to go public and share information because exposing data breaches could compromise their reputation and potential clients’ trust. The problem with this lack of openness is that law firms aren’t able to learn from one another’s experiences. The FBI is currently making efforts to work privately with law firms to learn about their hacking experiences and to offer assistance when firms experience attacks.
Common Hacking Tactics
The leading hacking technique used on law firms is spear phishing, a targeted attack against a specific organization. In a spear-phishing attack, hackers spend a significant amount of time researching a company so they can infiltrate it. They may send personalized emails engineered to motivate people to respond quickly. The emails themselves can’t harm you, but responding to them definitely can.
Because of the sophistication and attention to detail involved in spear phishing attacks, these emails are often very believable. Law firms are also especially vulnerable to ransomware, which encrypts a firm's information and then demands a ransom for its restoration.
Hackers also use social engineering to get into law firms’ systems. Social engineering works because the people who give out the information may think they are giving out harmless information. However, hackers use this seemingly innocuous information to get into accounts and databases. If someone asks where you went to college, you might not bat an eye before answering. However, imagine all the different accounts you’ve signed up for online. Somewhere, the security question might be “What was the mascot of your college?”
Once a hacker uses social engineering to gain access to some of your personal information, they can use it to gain your trust in spear phishing campaigns.
Keeping Your Firm Secure
While it may seem like the biggest law firms would be the most tempting to hackers, small firms have also become a target for enterprising thieves. Being attacked costs firms money, and large law firms invest accordingly in security. In contrast, about 90% of small and medium businesses lack any protection for their customer information and email. Because small businesses spend less on security, cybercriminals see them as easy prey.
Be Vigilant
Treat your electronic information as if it were an extremely valuable asset that criminals are actively trying to take. It is. You wouldn’t leave your actual files outside for anyone to take, so be just as cautious with your electronic records.
Don’t bypass extra security measures. Use them. For example, Gmail’s two-step verification makes it much more difficult for anyone to compromise your account.
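As an illustration of what a two-step verification code actually is, here is a minimal sketch of the time-based one-time password (TOTP) algorithm standardized in RFC 6238, written against Python's standard library. This is the open standard used by common authenticator apps; it is not a description of Gmail's internal implementation.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))      # 30-second time window
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890" at t=59 yields "287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # → 287082
```

The point of the design is that the code is derived from a shared secret plus the current time, so a stolen password alone is not enough to log in.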
Know What to Look For
Learn how to spot fraudulent emails. Keep in mind that hackers are getting better at making fake mail look like real mail. Look out for offers that are too good to be true, vague details or addresses, misspellings, and grammatical mistakes. Reputable companies have copy editors; malicious hackers generally don't. That said, the most sophisticated hackers can afford copy editors too, so a professional-looking email can still be dangerous.
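These red flags lend themselves to simple automation. The sketch below is a hypothetical, deliberately naive scorer; the patterns and the `score_email` helper are invented for illustration, and no short list of regular expressions substitutes for a real mail filter.

```python
import re

# Illustrative red flags (hypothetical examples, not a production filter)
RED_FLAGS = [
    (r"(?i)\burgent\b|\bact now\b|\bimmediately\b", "pressure to respond quickly"),
    (r"(?i)verify your (account|password)", "credential-harvesting language"),
    (r"(?i)\bdear (customer|user|sir/madam)\b", "vague, impersonal greeting"),
    (r"(?i)you have (won|been selected)", "too good to be true"),
]

def score_email(text):
    """Return the list of red flags a message triggers; more hits, more suspicion."""
    return [reason for pattern, reason in RED_FLAGS if re.search(pattern, text)]

msg = "Dear customer, your account will be closed. Act now and verify your password."
print(score_email(msg))
```

A message that trips several of these patterns at once deserves a phone call to the purported sender before any reply.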
Stay Up to Date
Have fire drills. Hackers are constantly evolving, so your security practices must evolve too. Send fake phishing emails to your employees to see how they respond. One of the best things lawyers can do to avoid hacking is to stay educated about current attack vectors, especially those used against law firms.
It is your responsibility to ensure the safety of your clients’ information.
Use Common Sense
Finally, and perhaps most important, apply the same good habits you use for your personal accounts to your business accounts. Even if a password does not require lowercase and uppercase letters, symbols, and numbers, use them anyway. They work. Do not reuse the same password across sites, and change your passwords often. In the end, building the most secure system in the world is pointless if the password is 1234.
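To see why length and a mixed character set both matter, here is a rough back-of-the-envelope entropy estimate. The `entropy_bits` helper is illustrative only, not a real password auditor; it assumes random selection from the character pool and ignores dictionary words and reuse.

```python
import math
import string

def entropy_bits(password):
    """Rough entropy estimate: log2(pool size) bits per character."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password):          pool += 10
    if any(c in string.punctuation for c in password):     pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(round(entropy_bits("1234")))              # weak: short, digits only
print(round(entropy_bits("T4k!ngN0te$-2024")))  # far stronger: long, mixed pool
```

Even by this crude measure, a four-digit password offers only a dozen or so bits of entropy, while a long mixed-character password offers roughly a hundred.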
Start securing your law firm's data today with our 4-Step Computer Security Guide.
Featured image: “Hacker in Work. High Speed Computer Keyboard Typing by Professional Hacker. Hacking the Internet Photo Concept.” from Shutterstock.
Vaccination and Immunotherapy for Alzheimer’s Disease
Vaccination against amyloid is a promising approach for the development of Alzheimer’s disease (AD) therapeutics. Approximately half of the investigational new therapeutics in human clinical trials for AD are active or passive immunotherapeutics.
Active vaccination involves the injection of an antigen and relies on the production of antibodies in the vaccinated patient. Four human clinical trials of active vaccination currently are under way. Passive immunization is also a promising strategy that involves the production of antibodies outside of the patient and injection of these antibodies. There are currently 12 clinical trials of passive immunization. You can check for Alzheimer therapeutics in human clinical trials by visiting www.clinicaltrials.gov and searching for key words “Alzheimer’s and immunotherapy.”
Thinking out of the box
The development of vaccinations as a strategy for treating or preventing Alzheimer's is an example of thinking out of the box. Vaccinations commonly are associated with infectious diseases, like influenza, smallpox and polio, which appear to have little in common with neurodegenerative diseases, like Alzheimer's. Moreover, the brain is an immunoprivileged site with little access to antibodies, so it seems unlikely that antibodies would be protective in the brain.
Researchers were pleasantly surprised when Dale Schenk and co-workers at Elan Inc. reported that vaccination of transgenic mouse models of AD against the amyloid Aß peptide prevented amyloid deposition in young animals and removed pre-existing amyloid deposits in older animals. Subsequent work showed that immunization against Aß prevented or reversed many other pathological features and prevented cognitive dysfunction in transgenic mice and non-human primates. This vaccine (Elan AN1792) was tested in human clinical trials, where it showed similar beneficial effects of removing amyloid deposits and slowing cognitive decline in patients with significant levels of anti-Aß antibodies, but the clinical trial was halted because 6 percent of the patients developed meningoencephalitis, an inflammatory side effect.
Second-generation vaccines and passive immunization
To circumvent the unwanted inflammatory side effects, second-generation active vaccines have been developed and passive immunization strategies have been explored. The second-generation vaccines use small pieces of the amyloid Aß sequence to avoid activating the T-cells responsible for meningoencephalitis, while passive immunization bypasses the human immune response by directly supplying antibodies. These newer strategies have shown the same beneficial effects in transgenic mice, and passive immunization has shown some promise in a subset of patients in human trials, but they have raised new questions about their effectiveness and potential new side effects. Elan/Wyeth reported preliminary results from clinical trials of their monoclonal antibody, Bapineuzumab, that demonstrated only a small benefit in a subgroup of patients who lack the apoE4 genotype. They also failed to observe an improved benefit with an increased dose of antibody and reported side effects, like a buildup of fluid in the brain. Results of human clinical trials of active vaccination with second-generation vaccines remain to be reported.
Third-generation vaccines and antibodies: Thinking perpendicular to the box
Both second-generation vaccines and antibodies suffer from a common problem. They both target linear amino acid sequences found in normal human proteins (the amyloid precursor protein) and in the amyloid deposits themselves. Making antibodies against normal human proteins can cause autoimmune side effects, in which the immune system attacks normal human cells in addition to the Alzheimer's pathology. Fortunately, it is difficult to make antibodies against self-proteins because of immune suppression of autoantibodies. Third-generation vaccines seek to overcome these problems of autoimmune side effects and autoimmune suppression by using antibodies that target structures specific to the amyloid aggregates and that do not react with normal human proteins.
Cure Alzheimer’s Fund has been supporting two projects that seek to develop third-generation immunotherapeutics. Dr. Charles Glabe’s laboratory is developing active vaccines and monoclonal antibodies that recognize conformations of the amyloid peptide that only occur in the pathological amyloid oligomer aggregates, while Dr. Rob Moir’s lab is working on cross-linked amyloid peptides (CAPs) that are only found in disease-related aggregates. Dr. Glabe’s strategy relies on the fact that when the Aß peptide aggregates into ß-sheet oligomers, it creates new antibody recognition sites, known as epitopes, that are not found on native proteins. The surprising finding is that these oligomer-specific antibodies recognize amyloid oligomers from other diseases that involve amyloids formed from sequences unrelated to Aß. This means the same antibodies also may be effective for other amyloid-related neurodegenerative diseases, like Parkinson’s disease.
The explanation for why the antibodies are specific for amyloid oligomers that involve several individual peptide strands arranged in a sheet and yet recognize these sheets when they are formed from other amino acid sequences is simple and elegant (Figure 1). It is now known that most pathological amyloids aggregate into simple and very regular structures where the peptide strands are arranged in parallel and where the amino acid sequence is in exact register. This is like a sheet of paper upon which the same sentence is written on each line. The individual amino acids line up and down the sheet in homogeneous tracts, known as “steric zippers.” The steric zippers do not occur in normal protein structures and the oligomer-specific antibodies are thought to recognize these steric zipper patterns on the surface of the sheets. Since all proteins are made up using the same 20 amino acids, any sequence in this parallel, in-register structure gives rise to the same steric zippers regardless of the linear sequence, which can explain why the antibodies recognize the oligomers formed by different proteins.
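The "same sentence written on each line" analogy can be sketched in a few lines of code. The sequences below are arbitrary illustrative fragments, and the function is only a picture of the geometry, not a chemistry simulation: stacking any peptide in a parallel, in-register sheet turns each column into a homogeneous tract of a single amino acid.

```python
def steric_zipper_columns(sequence, n_strands=4):
    """Stack a peptide in-register n_strands times; each column is one residue repeated."""
    sheet = [sequence] * n_strands            # parallel, in-register strands
    return ["".join(col) for col in zip(*sheet)]

# Two unrelated sequences both yield homogeneous columns ("zippers").
abeta_fragment = "KLVFF"   # a short Aß-derived segment, used here only as an example
random_seq     = "FGAVQ"   # an arbitrary unrelated sequence
print(steric_zipper_columns(abeta_fragment, 3))  # → ['KKK', 'LLL', 'VVV', 'FFF', 'FFF']
```

Because both sequences contain F and V, both sheets display the same F and V "zippers" running perpendicular to the sequence, which is the geometric reason an oligomer-specific antibody can recognize aggregates of unrelated proteins.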
Dr. Moir's group is working on CAPs, in which Aß is cross-linked by oxidation of a tyrosine residue at position 10 of the peptide's sequence. Aß is oxidized after it is produced from the amyloid precursor protein as a consequence of the abnormally high level of oxidative activity in a brain with AD and the peptide's propensity to bind redox-active metals. Excessive CAPs generation is associated with the disease state and is not a normal feature of Aß biology. The cross-linking at tyrosine 10 that gives rise to CAPs may serve to align the peptides in a parallel, in-register fashion and promote the generation of still-larger oligomeric aggregates that display steric zippers on their surface.
Dr. Moir and Dr. Rudy Tanzi's labs found that natural antibodies to CAPs are reduced in the blood of patients with AD. More recently, evidence published by Tony Wyss-Coray's group at Stanford University supports the idea that antibodies that recognize steric zippers and CAPs may be important for protecting against Alzheimer's disease. The levels of these antibodies that target the zippers and CAPs were among the highest in young, normal humans; levels dropped with aging and with AD. Furthermore, a recent study supported by Baxter Biosciences of patients who received human antibodies purified from normal individuals (IVIg) reported that antibody treatment reduced the risk of being diagnosed with AD by 42 percent over the five-year study period. This is one of the most remarkable reports of prevention of AD by any therapy. Although the normal human antibodies that target amyloid primarily recognize the steric zippers and CAPs, these antibodies are present at relatively low levels. It is reasonable to imagine that an even greater protective effect might be achieved by boosting the levels of these protective antibodies by either active vaccination or passive immunization.
Figure 1 shows how the same steric zipper patterns are formed on parallel, in-register oligomers from completely different sequences. A segment of the Aß sequences is shown in the upper left corner and a random sequence is shown in the upper right. Each amino acid is designated by a capital letter. Typical antibodies recognize the linear sequence (from left to right) indicated in the horizontal boxes, which is unique to each sequence. When the peptides aggregate to form pathological oligomers, they line up in a parallel, in-register fashion, shown below. This gives rise to steric zippers that run up and down the sheet perpendicular to the sequence, shown in vertical boxes. Aggregation-dependent, disease-specific antibodies recognize the steric zippers from many different amyloid sequences. Zippers from F and V amino acids are shown in boxes, but there are potentially 20 different zippers; one for each of the 20 amino acids.
The fact that a completely random sequence can form the same type of steric zipper as is found in Aß amyloid in Alzheimer’s disease means we can use a non-human, random peptide sequence as a vaccine to produce a protective immune response that has a very low potential for autoimmune side effects. Vaccines based on non-human peptides, like diphtheria and pertussis toxin, are so safe they routinely are given to infants. There is no reason to expect that a vaccine for AD that targets the disease-specific steric zippers wouldn’t be as safe and free of side effects. A goal of the research funded by Cure Alzheimer’s Fund is to do the preclinical investigations that are a necessary prelude to getting these third-generation vaccines and monoclonal antibodies that target disease-specific epitopes into human clinical trials.
Aliphatic alcohols occur naturally in free form (as a component of the cuticular lipids) but more usually in esterified (wax esters) or etherified (glyceryl ethers) form. Several alcohols belong to the aroma compounds found in environmental or food systems (see the website Flavornet). They occur with normal or branched (mono- or isoprenoid) chains, saturated or unsaturated, of various chain lengths, and sometimes with a secondary or even tertiary alcohol function. An unusual phenolic alcohol is found as a component of glycolipids in Mycobacteria. Some cyclic alcohols have been described in plants.
A classification according to the carbon-chain structure is given below.
1 - Normal-chain alcohols
The carbon chain may be fully saturated or unsaturated (with double and/or triple bonds); it may also be substituted with chlorine, bromine or sulfate groups. Some acetylenic alcohols have also been described.
Some of the most common are listed below:
9-methyl-1-hendecanol (anteisolauryl alcohol)
11-methyl-1-tridecanol (anteisomyristyl alcohol)
14-methyl-1-pentadecanol (isopalmityl alcohol)
13-methyl-1-pentadecanol (anteisopalmityl alcohol)
16-methyl-1-heptadecanol (isostearyl alcohol)
15-methyl-1-heptadecanol (anteisostearyl alcohol)
Free fatty alcohols are not commonly found in the epicuticular lipids of insects, although high molecular weight alcohols have been reported in honeybees (Blomquist GJ et al., Insect Biochem 1980, 10, 313). Long-chain alcohols also have been reported in the defensive secretions of scale insects (Byrne DN et al., Physiol Entomol 1988, 13, 267). Insects more typically produce lower molecular weight alcohols. Honeybees produce alcohols of 17-22 carbons, which induce arrestment in parasitic varroa mites (Donze G et al., Arch Insect Biochem Physiol 1998, 37, 129). Two female-specific fatty alcohols, docosanol (C22) and eicosanol (C20), found in the epicuticle of Triatoma infestans (a vector of Chagas disease in South America), are able to trigger copulation in males (Cocchiararo-Bastias L et al., J Chem Ecol 2011, 37, 246). Hexadecyl acetate is found in the web of some spiders (Pholcidae) to attract females (Schulz S, J Chem Ecol 2013, 39, 1).
Long-chain alcohols (C18, C24, C28) from the femoral glands in the male lizard Acanthodactylus boskianus play a role in chemical communication as a scent marking pheromone (Khannoon ER et al., Chemoecology 2011, 21, 143).
Various fatty alcohols are found in the waxy film that plants have over their leaves and fruits. Among them, octacosanol (C28:0) is the most frequently cited.
Policosanol is a natural mixture of higher primary aliphatic alcohols isolated and purified from sugar cane (Saccharum officinarum, L.) wax, whose main component is octacosanol but which also contains hexacosanol (C26:0) and triacontanol, or melissyl alcohol (C30:0). Policosanol is also extracted from a diversity of other natural sources such as beeswax, rice bran, and wheat germ (Irmak S et al., Food Chem 2006, 95, 312), and is also present in the fruits, leaves, and surfaces of plants and in whole seeds. A complex policosanol mixture has been identified in peanut (Cherif AO et al., J Agric Food Chem 2010, 58, 12143): more than 20 aliphatic alcohols (C14-C30) and four unsaturated alcohols (C20-C24) were identified. The total policosanol content of the whole peanut samples varied from 11 to 54 mg/100 g of oil.
This mixture was shown to have cholesterol-lowering effects in rabbits (Arruzazabala ML et al., Biol Res 1994, 27, 205). Octacosanol was also able to suppress lipid accumulation in rats fed a high-fat diet (Kato S et al., Br J Nutr 1995, 73, 433) and to inhibit platelet aggregation (Arruzazabala ML et al., Thromb Res 1993, 69, 321). Although its effectiveness is still questionable, policosanol has been approved as a cholesterol-lowering drug in over 25 countries (Carbajal D et al., Prostaglandins Leukotrienes Essent Fatty Acids 1998, 58, 61), and it is sold as a lipid-lowering supplement in more than 40 countries. More recent studies in mice, however, question any effect on lipoprotein profiles (Dullens SPJ et al., J Lipid Res 2008, 49, 790); the authors conclude that individual policosanols, as well as natural policosanol mixtures, have no potential for reducing coronary heart disease risk through effects on serum lipoprotein concentrations. Furthermore, sugar cane policosanol at doses of 20 mg daily showed no lipid-lowering effects in subjects with primary hypercholesterolemia (Francini-Pesenti F et al., Phytother Res 2008, 22, 318). It must be noted that, for the most part, positive results have been obtained by only one research group, in Cuba; outside Cuba, all groups have failed to validate the cholesterol-lowering efficacy of policosanols (Marinangeli C et al., Crit Rev Food Sci Nutr 2010, 50, 259). Independent studies are required before the exact therapeutic value of the mixture can be evaluated.
An unsaturated analogue of octacosanol, octacosa-10,19-dien-1-ol, was synthesized and was as effective as policosanol in inhibiting the upregulation of HMG-CoA reductase (Oliaro-Bosso S et al., Lipids 2009, 44, 907). This work opens promising perspectives for the design of new antiangiogenic compounds (Thippeswamy G et al., Eur J Pharmacol 2008, 588, 141).
1-Octanol and 3-octanol are components of the mushroom flavor (Maga JA, J Agric Food Chem 1981, 29, 1).
Many alcohols in the C10 to C18 range, and their short-chain acid esters are potent sex or aggregation pheromones. They are mainly found as components of specialized defensive glands, pheromone glands or glands of the reproductive system.
A series of C22 up to C28 saturated n-alcohols, with even carbon numbers predominating and a maximum at C26 and C28, has been identified in the cyanobacterium Anabaena cylindrica (Abreu-Grobois FA et al., Phytochemistry 1977, 16, 351). Several authors have reported high contents of the 22:0 alcohol in sediments where an algal origin is plausible. For example, the major alcohol in a sample of the lacustrine Green River Shale of Eocene age is also 22:0, which comprises over 50% of the alcohols present (Sever JR et al., Science 1969, 164, 1052).
Long-chain alcohols are known as major surface lipid components (waxes) with chains from C20 up to C34 carbon atoms, odd carbon-chain alcohols being found in only low amounts. Very long-chain methyl-branched alcohols (C38 to C44) and their esters with short-chain acids were shown to be present in insects, mainly during metamorphosis. A series of long-chain alkanols (more than 23 carbon atoms) were identified in settling particles and surface sediments from Japanese lakes and were shown to be produced by planktonic bacteria being thus useful molecular markers (Fukushima K et al., Org Geochem 2005, 36, 311).
Cutin and suberin contain, as monomers, saturated alcohols from C16 to C22, making up to 8% of the total polymers. C18:1 alcohol (oleyl alcohol) is also present.
Long-chain di-alcohols (1,3-alkanediols) have been described in the waxes that impregnate the matrix covering all organs of plants (Vermeer CP et al., Phytochemistry 2003, 62, 433). These compounds, forming about 11% of the leaf cuticular waxes of Ricinus communis, were identified as homologous unbranched alcohols ranging from C22 to C28, with hydroxyl groups at carbon atoms 1 and 3.
In the leaf cuticular waxes of Myricaria germanica (Tamaricaceae) several alkanediols were identified (Jetter R, Phytochemistry 2000, 55, 169). Hentriacontanediol (C31), with one hydroxyl group in the 12-position and the second in positions 2 to 18, is the most abundant diol (9% of the wax). Others were far less abundant: C30-C34 alkanediols with one hydroxyl group on a primary and one on a secondary carbon atom, C25-C43 β-diols and C39-C43 γ-diols. Very-long-chain 1,5-alkanediols ranging from C28 to C38, with a strong predominance of even carbon numbers, were identified in the cuticular wax of Taxus baccata (Wen M et al., Phytochemistry 2007, 68, 2563). The predominant diol had 32 carbon atoms (29% of the total).
Long-chain saturated C30-C32 diols occur in most marine sediments and in a few instances, such as in Black Sea sediments, they can be the major lipids (de Leeuw JW et al., Geochim Cosmochim Acta 1981, 45, 2281). A microalgal source for these compounds was discovered when Volkman JK et al. (Org Geochem 1992, 18, 131) identified C30-C32 diols in marine eustigmatophytes from the genus Nannochloropsis.
Two nonacosanetriols (7,8,11-nonacosanetriol and 10,12,15-nonacosanetriol) have been isolated from the outer fleshy layer (sarcotesta) of the Ginkgo biloba "fruit" (Zhou G et al., Chem Phys Lipids 2012, 165, 731). They exhibited slight antithrombin activity and moderate effects on platelet aggregation in vitro.
The chief lipid fraction in the uropygial gland excretion of the domestic hen is a diester wax. The unsaponifiable fraction consists of a series of three homologous compounds, which have been named the uropygiols and identified as 2,3-alkanediols containing 22-24 carbon atoms. These fatty alcohols are esterified by saturated normal C22-C24 fatty acids (Haahti E et al., J Lipid Res 1967, 8, 131).
- Unsaturated alcohols
Some fatty alcohols have one double bond (monounsaturated).
The unique double bond may be found in different positions: at C6, i.e. cis-6-octadecen-1-ol (petroselinyl alcohol); at C9, i.e. cis-9-octadecen-1-ol (oleyl alcohol); and at C11, i.e. cis-11-octadecen-1-ol (vaccenyl alcohol). Some of these alcohols have insect pheromone activity. As an example, 11-eicosen-1-ol is a major component of the alarm pheromone secreted by the sting apparatus of the worker honeybee. In zooplankton, cis-11-docosen-1-ol (22:1(n-11) alcohol) is not only present in high proportion in wax esters (54 to 83%) but may also predominate in free form (75-94% of free alcohols) in ctenophores (Graeve M et al., Mar Biol 2008, 153, 643). This presence is unexplained because the pathways for conversion and catabolism of fatty alcohols in ctenophores are still unknown.
Some short-chain unsaturated alcohols are components of mushroom flavor, such as 1-octen-3-ol, trans-2-octen-1-ol, and cis-2-octen-1-ol (Maga JA, J Agric Food Chem 1981, 29, 1).
An acetoxy derivative of a 16-carbon alcohol with one double bond, gyptol (10-acetoxy cis-7-hexadecen-1-ol), was described to be a strong attractive substance secreted by a female moth (Porthetria dispar, "gypsy moth").
A fatty alcohol with two double bonds, bombykol (trans-10,cis-12-hexadecadien-1-ol), was also shown to be excreted as a very strong attractant by the female silkworm (Bombyx mori).
This first discovery of a pheromone was made by Butenandt A et al. (Z Naturforsch 1959, 14, 283), who had earlier been awarded the Nobel Prize (1939) for his work on sex hormones. Another pheromone, 8,10-dodecadienol (codlemone), secreted by the codling moth Cydia pomonella, has been used for monitoring and mating disruption in apple and pear orchards in the USA and Europe. This molecule was also used to monitor the population of the pea moth Cydia nigricana. Likewise, 7,9-dodecadienol, the female pheromone of the European grapevine moth Lobesia botrana, was used to control this important pest in vineyards.
A fatty triol with one double bond, avocadene (16-heptadecene-1,2,4-triol), is found in avocado fruit (Persea americana) and has been tested for anti-bacterial and anti-inflammatory properties. These properties are likely related to the curative effects of avocado described for a number of ailments (diarrhea, dysentery, abdominal pain and high blood pressure). Several other heptadecanols, with one primary and two secondary alcohol functions and with one double or triple bond, have been identified in the leaves of Persea americana (Lee TH et al., Food Chem 2012, 132, 921). One or two of these alcohol groups may be acetylated. These compounds may be related to the known antifungal activity of Persea leaves.
Long-chain alkenols (C37 to C39) with 2 to 4 double bonds, the reduced form of the alkenones, have been described in the benthic haptophyte Chrysotila lamellosa (Rontani JF et al., Phytochemistry 2004, 65, 117). C30 to C32 alcohols having one or two double bonds are significant constituents of the lipids of marine eustigmatophytes of the genus Nannochloropsis (Volkman JK et al., Org Geochem 1992, 18, 131). These microalgae could be partially the source of the alkenols found in some marine sediments.
Two chlorinated derivatives of unusual alcohols were described in a red alga Gracilaria verrucosa (Shoeb M et al., J Nat Prod 2003, 66, 1509). Both compounds have a C12 aliphatic chain chlorinated in position 2 and with one double bond at carbon 2 (compound 1 : 2-chlorododec-2-en-1-ol) or two double bonds at carbon 2 and 11 (compound 2 : 2-chlorododec-2,11-dien-1-ol).
- Acetylenic alcohols
Natural acetylenic alcohols and their derivatives have been isolated from a wide variety of plant species, fungi and invertebrates. Pharmacological studies have revealed that many of them display notable chemical and medicinal properties.
Monoacetylenic alcohols: some were isolated from cultures of Clitocybe catinus (Basidiomycetes), and the study of their structure revealed the presence of two or three hydroxyl groups (Arnone A et al., Phytochemistry 2000, 53, 1087). One of these compounds is shown below.
Acetylenic alcohols have also been described in a tropical sponge, Reniochaline sp (Lee HS et al., Lipids 2009, 44, 71). One of the two described in that species is shown below; it exhibited significant growth-inhibitory activity against human tumor cell lines.
Polyacetylenic alcohols: several examples with different chain lengths, unsaturation degrees, and substitutions have been reported from terrestrial plants and marine organisms. Food plants of the Apiaceae (Umbelliferae) family, such as carrots, celery and parsley, are known to contain several bioactive bisacetylenic alcohols. The main plant sources of these compounds are Angelica dahurica, Heracleum sp and Crithmum maritimum (falcarindiol, falcarinol), red ginseng (Panax ginseng) (panaxacol, panaxydol, panaxytriol), Cicuta virosa (virol A), and Clibadium sylvestre (cunaniol). All these compounds display antibiotic or cytotoxic activities.
Polyacetylenes have been isolated from the stems of Oplopanax elatus (Araliaceae), plant used in Korean and Chinese traditional medicine for anti-inflammatory and analgesic purposes (Yang MC et al., J Nat Prod 2010, 73, 801). Among the most efficient in inhibiting the formation of nitric oxide in LPS-induced cells is a seventeen-carbon diyne diol with an epoxy cycle, oploxyne A. Other parent compounds without the epoxy group were also described.
Falcarinol, a seventeen-carbon diyne fatty alcohol (1,9-heptadecadiene-4,6-diyn-3-ol), was first isolated from Falcaria vulgaris (Bohlmann F et al., Chem Ber 1966, 99, 3552) as well as from Korean ginseng (Takahashi et al., Yakugaku Zasshi 1966, 86, 1053). It was also isolated from carrot (Hansen SL et al., J Sci Food Agric 2003, 83, 1010). Falcarinol has potent anticancer properties on primary mammary epithelial cells, where its activity was compared with that of β-carotene. These results might be important in developing new cancer treatments based on simple and common vegetables. At high concentrations, falcarinol is capable of inducing contact dermatitis.
Falcarinol protects the vegetable from fungal diseases. It showed biphasic activity, having stimulatory effects between 0.01 and 0.05 µg per ml and inhibitory effects between 1 and 10 µg per ml, whereas β-carotene showed no effect in the concentration range 0.001-100 µg per ml (Hansen SL et al., J Sci Food Agric 2003, 83, 1010). Experiments with macrophage cells have shown that falcarinol (and its C-8 hydroxylated derivative, falcarindiol) reduced nitric oxide production, suggesting that these polyacetylenes are responsible for anti-inflammatory bioactivity (Metzger BT et al., J Agric Food Chem 2008, 56, 3554). Falcarindiol was first reported as a phytochemical in carrots (Daucus carota) (Bentley RK et al., J Chem Soc 1969, 685). Besides falcarinol, falcarindiol, and falcarindiol 3-acetate, nine additional bisacetylene alcohols were identified in Daucus carota (Schmiech L et al., J Agric Food Chem 2009, 57, 11030).
Experiments with human intestinal cells demonstrate that aliphatic C17-polyacetylenes (panaxydol, falcarinol, falcarindiol) are potential anticancer principles of carrots and related vegetables (parsley, celery, parsnip, fennel) and that synergistic interaction between bioactive polyacetylenes may be important for their bioactivity (Purup S et al., J Agric Food Chem 2009, 57, 8290). Compounds very similar to falcarinol, extracted from Panax japonicus, are potent α-glucosidase inhibitors (Chan HH et al., Phytochemistry 2010, 71, 1360). These inhibitors may potentially slow the progression of diabetes by decreasing the digestion and absorption of carbohydrates.
The water dropwort (Oenanthe crocata), which grows near streams in the Northern Hemisphere, contains a violent toxin, cicutoxin, which causes convulsions and respiratory paralysis (Uwai K et al., J Med Chem 2000, 43, 4508).
The biochemistry and bioactivity of polyacetylenes are presented in a review (Christensen LP et al., J Pharm Biomed Anal 2006, 41, 683), as are methods for the isolation and quantification of these compounds.
Many other polyacetylenic alcohols were found in primitive marine organisms, such as sponges and ascidians. These invertebrates have no physical defenses and thus have developed efficient chemical mechanisms, such as polyacetylenic metabolites, to resist predators and bacteria.
A C36 linear diacetylene alcohol named lembehyne was found in an Indonesian marine sponge (Haliclona sp) (Aoki S et al., Tetrahedron 2000, 56, 9945) and was later shown to induce neuronal differentiation in neuroblastoma cells (Aoki S et al., Biochem Biophys Res Comm 2001, 289, 558).
Several polyacetylenic alcohols with 22 carbon atoms were isolated and identified in lipid extract from a Red Sea sponge, Callyspongia sp (Youssef DT et al., J Nat Prod 2003, 66, 679). Their physical study revealed the presence of 4 triple bonds and one, two or three double bonds. The structure of one of these Callyspongenols is given below.
Several di- and tri-acetylenic diols with chains of 26 up to 31 carbon atoms, named strongylodiols, have been isolated from an Okinawan marine sponge (Petrosia sp) (Watanabe K et al., J Nat Prod 2005, 68, 1001). Some of them have cytotoxic properties.
Several polyacetylenic alcohols with 21 carbon atoms were isolated from a marine ascidian (Polyclinidae) and were determined to have two triple bonds combined with a conjugated dienyne group (Gavagnin M et al., Lipids 2004, 39, 681). Some of them have an additional hydroxyl group or only three double bonds. The structure of one of these molecules is given below.
Several brominated polyacetylenic diols with cytotoxic properties were isolated from a Philippines sponge Diplastrella sp (Lerch ML et al., J Nat Prod 2003, 66, 667). One of these molecules is shown below.
A comprehensive survey of acetylenic alcohols in plant and invertebrates with information on their anticancer activity has been released by Dembitsky VM (Lipids 2006, 41, 883).
- Sulfated alcohols
Long-chain di-hydroxy alcohols in which both the primary and secondary hydroxyl groups are converted to sulfate esters, and into which one to six chlorine atoms are introduced at various positions, have been discovered in the alga Ochromonas danica (Chrysophyceae, Chrysophyta), where they constitute 15% of the total lipids (Haines TH, Biochem J 1969, 113, 565). An example of these chlorosulfolipids is given below. There may be several patterns of chlorine addition: one at R4, two at R3 and R5 or at R1 and R2, five at R1 to R5, or six at R1 to R6.
Similar molecules with a 24-carbon chain were also described in Ochromonas malhamensis (reviews in Dembitsky VM et al., Prog Lipid Res 2002, 41, 315 and Bedke DK et al., Nat Prod Rep 2011, 28, 15). It was suggested that the chlorosulfolipids replace sulfoquinovosyl diglyceride, since when the latter is high the former is low and vice versa. They have been associated with the human toxicity of mussel-derived lipids (Diarrhetic Shellfish Poisoning).
Several of these chlorosulfolipids have also been identified from more than 30 species of both freshwater and marine algae belonging to green (Chlorophyceae), brown (Phaeophyceae), red (Rhodophyceae) macrophytic algae (Mercer EI et al., Phytochemistry 1979, 18, 457), and other microalgal species (Mercer EI et al., Phytochemistry 1975, 14, 1545).
Some fatty alcohols, such as dodecanol (lauryl or dodecyl alcohol), are used for the manufacture of detergents after sulphonation (by action of SO3 gas). The salt sodium laurylsulfate (or sodium dodecylsulfate) is a detergent and strong anionic surfactant, used in biochemistry and in the composition of cosmetic products (shampoos, toothpastes).
2 - Branched-chain alcohols
In 1936, Stodola et al. characterized an optically active substance recovered on saponification of "purified waxes" of Mycobacterium tuberculosis, determined its global formula, and proposed to name it phthiocerol (Stodola FH et al., J Biol Chem 1936, 114, 467). In 1959, after several chemical studies, its structure was determined as a mixture of C36 and C34 β-glycols. It has been proposed that the term phthiocerol be reserved for the original 3-methoxy congener (phthiocerol A) and that the term phthioglycol be used to refer to the family of compounds (Onwueme KC et al., Prog Lipid Res 2005, 44, 259).
Among the most important saturated isopranols found in plants or in geological sediments are those having two (tetrahydrogeraniol), three (farnesanol), or four (phytanol) isoprenoid units. Pristanol (2,6,10,14-tetramethyl-1-pentadecanol) is tetramethylated but with only three complete isoprenoid units.
B - Unsaturated polyisoprenoids (prenols or polyprenols)
They have the following general structure :
These molecules consist of from a few up to more than 100 isoprene residues linked head-to-tail, with a hydroxyl group at one end (the α-residue) and a hydrogen atom at the other (the ω-end).
- 1. all-trans forms: They have the following structure:
Some important members of the series, characterized by their number of isoprene units and the corresponding number of carbon atoms, are described below.
Long-chain trans-polyprenols (n > 8) have been characterized from Eucommia ulmoides.
Geraniol (from rose oil) is a monoterpene (2 isoprene units). It has a rose-like odor and is commonly used in perfumes and in several fruit flavors. Geraniol is also an effective mosquito repellent. Conversely, it can attract bees, as it is produced by the scent glands of honey bees to help them mark nectar-bearing flowers and locate the entrances to their hives.
Farnesol is a sesquiterpene (3 isoprene units). It is the prenol that corresponds to the carbon skeleton of the simplest juvenile hormone described for the first time in insects in 1961 (Schmialek PZ, Z Naturforsch 1961, 16b, 461; Wigglesworth VB, J Insect Physiol 1961, 7, 73). It is present in many essential oils such as citronella, neroli, cyclamen, lemon grass, rose, and others. It is used in perfumery to emphasize the odors of sweet floral perfumes. It is especially used in lilac perfumes. As a pheromone, farnesol is a natural pesticide for mites. The dimorphic fungus Candida albicans has been shown to use farnesol as quorum-sensing molecule (Hornby JM et al., Appl Environ Microbiol 2001, 67, 2982).
Geranylgeraniol is a diterpene (4 isoprene units). Geraniol and geranylgeraniol are important molecules in the synthesis of various terpenes, the acylation of proteins and the synthesis of vitamins (Vitamins E and K). The covalent addition of phosphorylated derivatives of typical isoprenoids, farnesyl pyrophosphate and geranylgeranyl pyrophosphate, to proteins is a process (prenylation) common to G protein subunits. These isoprenylated proteins have key roles in membrane attachment leading to central functionality in cell biology and pathology.
Solanesol, discovered in tobacco leaves in 1956 (Rowland RL et al., J Am Chem Soc 1956, 78, 4680), may be an important precursor of the tumorigenic polynuclear aromatic hydrocarbons of smoke but is also a possible side chain for plastoquinone. Solanesol is also present in the leaves of other Solanaceae plants including tomato, potato, eggplant and pepper. It has useful medicinal properties and is known to possess anti-bacterial, anti-inflammation, and anti-ulcer activities (Khidyrova NK et al., Chem Nat Compd 2002, 38, 107). Industrially, solanesol is extracted from Solanaceae leaves (about 450 tons in 2008) and used as an intermediate in the synthesis of coenzyme Q10 and vitamin K analogues.
Spadicol was discovered in the spadix (inflorescence) of the Araceae Arum maculatum (Hemming FW et al., Proc R Soc London 1963, 158, 291). Its occurrence is likely related to its role as the side-chain of ubiquinone.
Phytol is a partially saturated diterpene, a monounsaturated derivative of geranylgeraniol, which is part of the chlorophyll molecule:
- 2. ditrans-polycis-prenols, such as the bacterial prenol and betulaprenol types. In general, bacteria, like all prokaryotic cells, possess ditrans-polycis-prenols containing between 10 and 12 units, the most abundant being undecaprenol (trivial name bactoprenol).
Betulaprenols with n = 3-6 were isolated from the woody tissue of Betula verrucosa (Wellburn AR et al., Nature 1966, 212, 1364), and bacterial polyprenols with n = 8 were isolated from Lactobacillus plantarum (Gough DP et al., Biochem J 1970, 118, 167). Betulaprenol-like species with 14 to 22 isoprene units have been discovered in leaves of Ginkgo biloba (Ibata K et al., Biochem J 1983, 213, 305).
Polyisoprenoid alcohols are accumulated in the cells most often as free alcohols and/or esters with carboxylic acids. A fraction of polyisoprenoid phosphates has also been detected, and this form is sometimes predominant in dividing cells and Saccharomyces cerevisiae (Adair WL et al., Arch Biochem Biophys 1987, 259, 589).
- 3. tritrans-polycis-prenols, of the ficaprenol type. Some of the earliest samples were obtained from Ficus elastica, giving rise to the trivial names ficaprenol-11 and ficaprenol-12 (Stone KJ et al., Biochem J 1967, 102, 325).
In plants, the diversity of polyprenols is much broader, their chain length covers the broad spectrum of compounds ranging from 6 up to 130 carbon atoms (Rezanka T et al., J Chromatogr A 2001, 936, 95).
- 4. dolichol types, in which the α-terminal isoprenoid unit is saturated.
Most eukaryotic cells contain one type of polyisoprenoid alcohol with one α-saturated isoprenoid unit (2,3-dihydro polycis-prenols), which has been called dolichol (a derivative of prenols) by Pennock JF et al. (Nature 1960, 186, 470). Most of these carry two trans units at the ω-end of the chain.
Dolichols (from the Greek dolichos: long) have the general structure:
Dolichols isolated from yeast or animal cells consist mainly of seven to eight compounds, those with 16, 18, or 19 isoprenoid units being the most abundant (Ragg SS, Biochem Biophys Res Comm 1998, 243, 1).
The dolichol amount was shown to be increased in the brain gray matter of elderly people (Pullarkat RK et al., J Biol Chem 1982, 257, 5991). Dolichols with 19, 22 and 23 isoprenoid units were described as early as 1972 in marine invertebrates (Walton MJ et al., Biochem J 1972, 127, 471), and the pattern of their distribution may be considered a chemotaxonomic criterion. A high proportion of dolichols has been reported to be esterified to fatty acids; for example, 85-90% of dolichols are esterified in mouse testis (Potter J et al., Biochem Biophys Res Comm 1983, 110, 512). In addition, dolichyl dolichoate has been found in bovine thyroid (Steen L et al., Biochim Biophys Acta 1984, 796, 294).
They are well known for their important role, in phosphorylated form, as glycosyl carriers in the synthesis of polysaccharides and glycoproteins in yeast and animal cells. Dolichyl phosphate is an obligatory intermediate in the biosynthesis of N-glycosidically linked oligosaccharide chains. Dolichols have also been identified as the predominant isoprenoid form in roots (Skorupinska-Tudek K et al., Lipids 2003, 38, 981) and in mushroom tissue (Wojtas M et al., Chem Phys Lipids 2004, 130, 109). Similar compounds (ficaprenols) have the same metabolic function in plants.
The repartition of the various types of polyisoprenoid alcohols between plants and animals, their biosynthesis, and their biological roles have been extensively reviewed (Swiezewska E et al., Prog Lipid Res 2005, 44, 235).
3 - Phenolic alcohols
Among the simple phenolic alcohols, monolignols are the source materials for biosynthesis of both lignans and lignin. The starting material for production of monolignols (phenylpropanoids) is the amino acid phenylalanine. There are two main monolignols: coniferyl alcohol and sinapyl alcohol. Para-coumaryl alcohol is similar to coniferyl alcohol but lacks the methoxy group.
Coniferyl alcohol is found in both gymnosperm and angiosperm plants. Sinapyl alcohol and para-coumaryl alcohol, the other two lignin monomers, are found in angiosperm plants and grasses. Coniferyl esters (coniferyl 8-methylnonanoate) have been described in the fruits of the pepper Capsicum baccatum (Kobata K et al., Phytochemistry 2008, 69, 1179). These compounds displayed agonist activity for the transient receptor potential vanilloid 1 (the capsaicin receptor), like the well-known capsaicinoids present in these plant species.
Complex phenolic alcohols (phenolphthiocerols) were shown to be components of Mycobacterium glycolipids termed glycosides of phenolphthiocerol dimycocerosate (Smith DW et al., Nature 1960, 186, 887), belonging to the large family of "mycosides". The chain length differs among homologues: 18 and 20 carbon atoms in mycosides A and B, respectively. One of these phenolphthiocerols is shown below.
An analogous component, with a ketone group instead of the methoxy group (a phenolphthiodiolone), has been detected in mycoside A (Fournie JJ et al., J Biol Chem 1987, 262, 3174).
An alcohol with a furan group, identified as 3-(4-methylfuran-3-yl)propan-1-ol, has been isolated from a fungal endophyte living in a plant, Setaria viridis (Nakajima H et al., J Agric Food Chem 2010, 58, 2882). That compound was found to have a repellent effect on an insect, Eysarcoris viridis, which is a major pest of rice.
Some cyclic alkyl polyols have been reported in plants. Among the various forms present in an Anacardiaceae from South America, Tapirira guianensis, two displayed anti-protozoal (Plasmodium falciparum) and anti-bacterial (Staphylococcus spp) activities (Roumy V et al., Phytochemistry 2009, 70, 305). The structure shown below is that of a trihydroxy alcohol containing a cyclohexene ring.
As emphasized by the authors, external application of the active plant extract or of the purified compounds could represent an accessible therapeutic alternative to classical medicine against leishmaniasis.
Fatty aldehydes are found in free form, but also in the form of vinyl ethers (known as alk-1-enyl ethers) integrated in glycerides and phospholipids (plasmalogens).
The free aldehydes, like fatty acids, can be saturated or unsaturated. They have the general formula CH3(CH2)nCHO with n = 6 to 20 or greater. The most common is palmitaldehyde (hexadecanal), with a 16-carbon chain. Normal monoenoic aldehydes are analogous to the monoenoic fatty acids.
It should be noted that an aldehyde function may be found at the terminal (ω) position while an acid function is present at the other end of the carbon chain (oxo fatty acids). These compounds have important signaling properties in plants.
Long-chain aldehydes have been described in the waxes which impregnate the matrix covering all organs of plants (Vermeer CP et al., Phytochemistry 2003, 62, 433). These compounds forming about 7% of the leaf cuticular waxes of Ricinus communis were identified as homologous unbranched aldehydes ranging from C22 to C28 with a hydroxyl group at the carbon 3. Long-chain 5-hydroxyaldehydes with chain lengths from C24 to C36, the C28 chain being the most abundant, were identified in the cuticular wax of Taxus baccata needles (Wen M et al., Phytochemistry 2007, 68, 2563). Long-chain aliphatic aldehydes with chain-length from C22 to C30 are also present in virgin olive oils, hexacosanal (C26) being the most abundant aldehyde (Perez-Camino MC et al., Food Chem 2012, 132, 1451).
Aldehydes may be produced during decomposition of fatty acid hydroperoxides following a peroxidation attack. Several aldehydes (hexanal, heptanal…) belong to the aroma compounds found in environmental or food systems (see the website Flavornet). Mono- or di-unsaturated aldehydes with 5 to 9 carbon atoms are produced by mosses (Bryophyta) after mechanical wounding (Croisier E et al., Phytochemistry 2010, 71, 574); they were shown to be produced by oxidative fragmentation of polyunsaturated fatty acids (C18, C20). Trans-2-nonenal is an unsaturated aldehyde with an unpleasant odor generated during the peroxidation of polyunsaturated fatty acids. It contributes to body odor and is found mainly covalently bound to protein in vivo (Ishino K et al., J Biol Chem 2010, 285, 15302).
Fatty aldehydes may be determined easily by TLC or gas-liquid chromatography. The most common method for the determination of aldehydes involves derivatization with an acidic solution of 2,4-dinitrophenylhydrazine to form the corresponding hydrazones, followed by HPLC separation and UV-VIS detection. An optimized derivatization procedure for the determination of aliphatic C1-C10 aldehydes has been described (Stafiej A et al., J Biochem Biophys Meth 2006, 69, 15).
Other short-chain aldehydes (octadienal, octatrienal, heptadienal) are produced via a lipoxygenase-mediated pathway from polyunsaturated fatty acids esterifying glycolipids in marine diatoms (D'Ippolito G et al., Biochim Biophys Acta 2004, 1686, 100). Eighteen species of diatoms have been shown to release unsaturated aldehydes (C7:2, C8:2, C8:3, C10:2, and C10:3) upon cell disruption (Wichard T et al., J Chem Ecol 2005, 31, 949).
Several short-chain aldehydes were shown to induce deleterious effects on zooplankton crustaceans and thus to limit secondary production in the water (birth-control aldehydes) (D'Ippolito G et al., Tetrahedron Lett 2002, 43, 6133). In laboratory experiments, three decatrienal isomers produced by various diatoms were shown to arrest embryonic development in copepods and sea urchins and to have antiproliferative and apoptotic effects on carcinoma cells (Miralto A et al., Nature 1999, 402, 173). Later, copepod recruitment in blooms of planktonic diatoms was shown to be suppressed by ingestion of dinoflagellate aldehydes (Nature 2004, 429, 403). It was demonstrated that diatoms can accurately sense the potent aldehyde 2E,4E/Z-decadienal and employ it as a signaling molecule to control diatom population sizes (Vardi A et al., PLoS Biol 2006, 4, e60). This aldehyde triggered a dose-dependent calcium transient derived from intracellular stores. The subsequent calcium increase led to nitric oxide (NO) generation by a calcium-dependent NO synthase-like activity, resulting in cell death in diatoms.
The formation of myeloperoxidase-derived chlorinated aldehydes from plasmalogens has been reported: the vinyl-ether bond of plasmalogens is susceptible to attack by HOCl, yielding a lysophospholipid and an α-chloro fatty aldehyde.
Both these chloro fatty aldehydes have been detected in neutrophils activated with PMA (Thukkani AK et al., J Biol Chem 2002, 277, 3842) and in human atherosclerotic lesions (Thukkani AK et al., Circulation 2003, 108, 3128). Furthermore, 2-chlorohexadecanal was shown to induce COX-2 expression in human coronary artery endothelial cells (Messner MC et al., Lipids 2008, 43, 581). These data suggest that 2-chlorohexadecanal, and possibly its metabolite 2-chlorohexadecanoic acid, both produced during leukocyte activation, may alter vascular endothelial cell function by upregulating COX-2 expression.
Long after the demonstration of the presence of iodinated lipids in thyroid (besides iodinated amino acids), it was shown that the major iodinated lipid formed in thyroid incubated in vitro with iodide was 2-iodohexadecanal (Pereira A et al., J Biol Chem 1990, 265, 17018). In rat and dog thyroid, 2-iodooctadecanal was determined to be more abundant than the 16-carbon aldehyde. These compounds, which are thought to play a role in the regulation of thyroid function, were recently shown to be formed by the attack of reactive iodine on the vinyl ether group of PE plasmalogen. This attack generates an unstable iodinated derivative which breaks into lysophosphatidylethanolamine and 2-iodo aldehydes (Panneels V et al., J Biol Chem 1996, 271, 23006).
In some bacteria, aldehyde analogs of cyclopropane fatty acids were described.
Several fatty aldehydes are known to have pheromone functions. Studies in African and Asian countries have shown that the use of 10,12-hexadecadienal could be effective for control of the spiny bollworm Earias insulana, a cotton pest. The sex pheromone of the navel orange worm, Amyelois transitella, 11,13-hexadecadienal, is usually used in the control of this citrus pest.
A branched saturated aldehyde (3,5,9-trimethyldodecenal, stylopsal) has been identified as a female-produced sex pheromone in Stylops (Strepsiptera), an entomophagous endoparasitic insect (Cvacka J et al., J Chem Ecol 2012, 38, 1483).
Several isoprenoid aldehydes are important in insect biology as pheromones and in botany as volatile odorous substances. Some examples are given below:
These three terpenic aldehydes are produced in large amounts by the mandibular glands of ants and may function as defensive repellents (Regnier FE et al., J Insect Physiol 1968, 14, 955). In contrast, the same molecules serve as recruiting pheromones in honeybees.
Citral, a mixture of the geometric isomers geranial (trans-citral) and neral (cis-citral), is a major component (more than 60%) of lemongrass (Cymbopogon flexuosus) oil. Lemongrass is widely used, particularly in Southeast Asia and Brazil, as a food flavoring, as a perfume, and for its medicinal properties (analgesic and anti-inflammatory). Citral was found to be a major suppressor of COX-2 expression and an activator of PPARα and PPARγ (Katsukawa M et al., Biochim Biophys Acta 2010, 1801, 1214).
It was demonstrated that damaged leaves released 2-hexenal, among other C6 volatile aldehydes, produced through the catalytic activity of hydroperoxide lyase (Turlings TC et al., Proc Natl Acad Sci USA 1995, 92, 4169). These compounds, considered signal molecules, can trigger several responses in neighboring plants and may also act as antimicrobial agents (Farmer EE, Nature 2001, 411, 854).
One important constituent of this group of aldehydes is retinal, an active form of vitamin A involved in light reception in animal eyes and also present in bacteria as a component of the proton pump.
Retinal exists in two forms, a cis and a trans isomer. On illumination with white light, the visual pigment rhodopsin is converted to a mixture of a protein (opsin) and trans-retinal. This isomer must be transformed into the cis form by retinal isomerase before it combines again with opsin (dark phase). Both isomers can be reduced to retinol (vitamin A) by an NADH-dependent alcohol dehydrogenase.
Retinol is stored in the retina mainly in an acylated form.
Cinnamic aldehyde (cinnamaldehyde) is the key flavor compound in cinnamon essential oil, extracted from Cinnamomum zeylanicum and Cinnamomum cassia bark. Investigations have revealed that cinnamic aldehyde activates the Nrf2-dependent antioxidant response in human epithelial colon cells (Wondrak GT et al., Molecules 2010, 15, 3338). It may therefore represent a precious chemopreventive dietary factor targeting colorectal carcinogenesis.
|
When it comes to solar power, the real dilemmas are efficiency and cost. On the one hand, efficiency has steadily improved over the last couple of decades, to the point where solar is approaching the utility prices of other power-generation methods, and exotic technologies promise even greater gains. On the other hand, solar-generated power still remains at least five times as expensive as coal power, the chief source of power in the U.S. (the leading alternative candidate, nuclear, is approximately 1.5 to 2 times as expensive).
While solar adoption from a cost standpoint is unattractive, there's much debate over whether commercial adoption is needed to spur further research to propel solar into the realm of cost competitiveness. While many nations like the U.S. and China have modestly taken this position, adopting solar at a moderate rate, one nation has fallen head over heels for solar -- Spain.
Spain is allowing solar and wind power plants to charge as much as 10 times the rates of coal power plants, making it possible for solar installations to earn utilities big money. On average, recent rate increases have raised solar charges to over 7 times the rates of coal or natural gas. The costs are added onto consumers' power bills.
The results are mixed; while Spanish power bills are at record highs, the number of deployments is soaring. Spain has 14 GW of solar power, the equivalent capacity of nine average nuclear reactors, under construction -- the most of any nation. Florida's FPL Group Inc. and France's Electricite de France SA are among the many jumping to build in Spain.
Gabriel Calzada, an economist and professor at Rey Juan Carlos University in Madrid, states, "Who wouldn’t want to enter a business that’s paid many times more than the market rate, and where the customer is guaranteed for life?"
By 2009, 42 percent of Spaniards' energy bills -- approximately 95 euros ($127) on average -- will be provided by alternative energy. Spanish law requires power distributors to buy all clean energy produced in the first 25 years of the plants' lives, and the government also recently raised these rates. Spain believes this sacrifice will pay off as fossil fuel resources become depleted and emissions standards tighten.
Karsten von Blumenthal, an industrial analyst at Hamburg-based SES Research GmbH states, "The guarantee is more attractive than what other countries offer. Actually the U.S. has better space for solar, in the deserts of California and Nevada."
The U.S. meanwhile is also advancing thanks in part to President Obama's solar initiatives passed earlier this year as part of the federal stimulus legislation. Over 6 GW of capacity is planned for the U.S.
Fred Morse, an official at the Washington- based Solar Energy Industries Association trade group and author of the first report to the White House on solar power (1969), says that the U.S. needs to adopt more incentives if it hopes to catch Spain. He states, "The incentives, if implemented promptly and effectively, should greatly facilitate the financing of these plants."
One promising benefit of the Spanish solar boom is that it is increasing the number of plants utilizing new, potentially more efficient technologies like solar thermal or sterling engines. Spain is limiting the number of photovoltaic plants (solar panel-based designs), but is giving out unlimited licenses for solar thermal and other alternative plants.
quote: 1 - Solar Tech is improving very quickly. In another 3~5 years, it SHOULD be just as cheap as coal.
quote: 4 - If many people buy their own, it reduces the drain on the electrical system
quote: 2 - If it's cheap, unlike a power plant (coal / nuke), anyone can buy it and install it on their roof or in their yard. Remember on DT - there's tech that'll allow your windows to be solar collectors.
quote: 5 - if everyone has one, it can help re-charge their electric cars. Because AS OF NOW, electric cars STILL require power from coal / nuke plants which generate pollution / waste.
quote: Assuming 0.45 cents per kWh here,
quote: But the pleasure of not giving the money to them... doesn't have a price...
|
/* -*- mode: c++; c-basic-offset: 4; indent-tabs-mode: nil -*-
this file is part of rcssserver3D
Fri May 9 2003
Copyright (C) 2002,2003 Koblenz University
Copyright (C) 2003 RoboCup Soccer Server 3D Maintenance Group
$Id$
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 of the License.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef AGENTSYNCEFFECTOR_H
#define AGENTSYNCEFFECTOR_H
#include <oxygen/agentaspect/effector.h>
class AgentSyncEffector : public oxygen::Effector
{
public:
AgentSyncEffector();
virtual ~AgentSyncEffector();
/** realize the sync action asap */
virtual bool Realize(boost::shared_ptr<oxygen::ActionObject> action);
/** returns the name of the predicate this effector implements. */
virtual std::string GetPredicate() { return "syn"; }
/** constructs an ActionObject describing a predicate */
virtual boost::shared_ptr<oxygen::ActionObject>
GetActionObject(const oxygen::Predicate& predicate);
/** setup the reference to the agent aspect node */
virtual void OnLink();
/** remove the reference to the agent aspect node */
virtual void OnUnlink();
private:
boost::shared_ptr<oxygen::AgentAspect> mAgentAspect;
};
DECLARE_CLASS(AgentSyncEffector);
#endif // AGENTSYNCEFFECTOR_H
|
#include "liver/fill_liver_volume.hpp"
#include "shape/box.hpp"
#define CATCH_CONFIG_MAIN
#include <catch.hpp>
#include <fmt/ostream.h>
#include <range/v3/all.hpp>
using namespace jhmi;
TEST_CASE( "Lobules two ways", "[fill_liver_volume]" ) {
auto shape = box{m3{}, dbl3{1,1,1} * 18_mm};
auto lob1 = std::vector<m3>{};
for_lobule(shape, [&](m3 const& pt, int3 const&) {
lob1.push_back(pt);
});
auto lob2 = lobules_in(shape);
RANGES_FOR(auto z, ranges::view::zip(lob1, lob2))
REQUIRE(distance(z.first - z.second).value() < 1e-16);
}
#if 0
TEST_CASE( "Portal tracts two ways", "[fill_liver_volume]" ) {
auto shape = box{m3{}, dbl3{1,1,1} * 18_mm};
auto tract1 = std::vector<m3>{};
for_portal_tract(shape, [&](m3 const& pt, int3 const&) {
tract1.push_back(pt);
});
auto tract2 = tracts_in(shape);
RANGES_FOR(auto z, ranges::view::zip(tract1, tract2))
REQUIRE(distance(z.first - z.second).value() < 1e-16);
}
#endif
TEST_CASE( "Lobule near", "[fill_liver_volume]") {
auto shape = box{m3{}, dbl3{1,1,1} * 18_mm};
auto lobules = std::map<int3,m3>{};
for_lobule(shape, [&](m3 const& pt, int3 const& ipt) {
lobules[ipt] = pt;
});
auto pts = ranges::view::ints(2,35);
RANGES_FOR(double i, pts) {
auto near = find_near_lobule(shape, i * dbl3{1,1,1} * .5_mm);
REQUIRE(near);
REQUIRE(distance(near->first - lobules[near->second]).value() < 1e-16);
}
}
TEST_CASE( "Tract near", "[fill_liver_volume]") {
auto shape = box{m3{}, dbl3{1,1,1} * 18_mm};
auto tracts = std::map<int3,m3>{};
for_portal_tract(shape, [&](m3 const& pt, int3 const& ipt) {
tracts[ipt] = pt;
});
auto pts = ranges::view::ints(2,35);
RANGES_FOR(double i, pts) {
auto near = find_near_tract(shape, i * dbl3{1,1,1} * .5_mm);
REQUIRE(near);
REQUIRE(distance(near->first - tracts[near->second]).value() < 1e-16);
}
}
|
//===--- TypeRef.h - Swift Type References for Reflection -------*- C++ -*-===//
//
// This source file is part of the Swift.org open source project
//
// Copyright (c) 2014 - 2016 Apple Inc. and the Swift project authors
// Licensed under Apache License v2.0 with Runtime Library Exception
//
// See http://swift.org/LICENSE.txt for license information
// See http://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
//
//===----------------------------------------------------------------------===//
//
// Implements the structures of type references for property and enum
// case reflection.
//
//===----------------------------------------------------------------------===//
#ifndef SWIFT_REFLECTION_TYPEREF_H
#define SWIFT_REFLECTION_TYPEREF_H
#include "swift/Basic/Demangle.h"
#include "llvm/Support/Casting.h"
#include <iostream>
class NodePointer;
namespace swift {
namespace reflection {
class ReflectionContext;
using llvm::cast;
enum class TypeRefKind {
#define TYPEREF(Id, Parent) Id,
#include "swift/Reflection/TypeRefs.def"
#undef TYPEREF
};
class TypeRef;
using TypeRefPointer = std::shared_ptr<TypeRef>;
using ConstTypeRefPointer = std::shared_ptr<const TypeRef>;
using TypeRefVector = std::vector<TypeRefPointer>;
using ConstTypeRefVector = const std::vector<TypeRefPointer>;
class TypeRef : public std::enable_shared_from_this<TypeRef> {
TypeRefKind Kind;
public:
TypeRef(TypeRefKind Kind) : Kind(Kind) {}
TypeRefKind getKind() const {
return Kind;
}
void dump() const;
void dump(std::ostream &OS, unsigned Indent = 0) const;
};
class BuiltinTypeRef final : public TypeRef {
std::string MangledName;
public:
BuiltinTypeRef(std::string MangledName)
: TypeRef(TypeRefKind::Builtin), MangledName(MangledName) {}
static std::shared_ptr<BuiltinTypeRef> create(std::string MangledName) {
return std::make_shared<BuiltinTypeRef>(MangledName);
}
std::string getMangledName() const {
return MangledName;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Builtin;
}
};
class NominalTypeRef final : public TypeRef {
std::string MangledName;
public:
NominalTypeRef(std::string MangledName)
: TypeRef(TypeRefKind::Nominal), MangledName(MangledName) {}
static std::shared_ptr<NominalTypeRef> create(std::string MangledName) {
return std::make_shared<NominalTypeRef>(MangledName);
}
std::string getMangledName() const {
return MangledName;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Nominal;
}
};
class BoundGenericTypeRef final : public TypeRef {
std::string MangledName;
TypeRefVector GenericParams;
public:
BoundGenericTypeRef(std::string MangledName, TypeRefVector GenericParams)
: TypeRef(TypeRefKind::BoundGeneric),
MangledName(MangledName),
GenericParams(GenericParams) {}
static std::shared_ptr<BoundGenericTypeRef>
create(std::string MangledName, TypeRefVector GenericParams) {
return std::make_shared<BoundGenericTypeRef>(MangledName, GenericParams);
}
std::string getMangledName() const {
return MangledName;
}
TypeRefVector getGenericParams() {
return GenericParams;
}
ConstTypeRefVector getGenericParams() const {
return GenericParams;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::BoundGeneric;
}
};
class TupleTypeRef final : public TypeRef {
TypeRefVector Elements;
public:
TupleTypeRef(TypeRefVector Elements)
: TypeRef(TypeRefKind::Tuple), Elements(Elements) {}
static std::shared_ptr<TupleTypeRef> create(TypeRefVector Elements) {
return std::make_shared<TupleTypeRef>(Elements);
}
TypeRefVector getElements() {
return Elements;
}
ConstTypeRefVector getElements() const {
return Elements;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Tuple;
}
};
class FunctionTypeRef final : public TypeRef {
TypeRefPointer Input;
TypeRefPointer Result;
public:
FunctionTypeRef(TypeRefPointer Input, TypeRefPointer Result)
: TypeRef(TypeRefKind::Function), Input(Input), Result(Result) {}
static std::shared_ptr<FunctionTypeRef> create(TypeRefPointer Input,
TypeRefPointer Result) {
return std::make_shared<FunctionTypeRef>(Input, Result);
}
TypeRefPointer getInput() {
return Input;
}
ConstTypeRefPointer getInput() const {
return Input;
}
TypeRefPointer getResult() {
return Result;
}
ConstTypeRefPointer getResult() const {
return Result;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Function;
}
};
class ProtocolTypeRef final : public TypeRef {
std::string ModuleName;
std::string Name;
public:
ProtocolTypeRef(std::string ModuleName, std::string Name)
: TypeRef(TypeRefKind::Protocol), ModuleName(ModuleName), Name(Name) {}
static std::shared_ptr<ProtocolTypeRef>
create(std::string ModuleName, std::string Name) {
return std::make_shared<ProtocolTypeRef>(ModuleName, Name);
}
std::string getName() const {
return Name;
}
std::string getModuleName() const {
return ModuleName;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Protocol;
}
};
class ProtocolCompositionTypeRef final : public TypeRef {
TypeRefVector Protocols;
public:
ProtocolCompositionTypeRef(TypeRefVector Protocols)
: TypeRef(TypeRefKind::ProtocolComposition), Protocols(Protocols) {}
static std::shared_ptr<ProtocolCompositionTypeRef>
create(TypeRefVector Protocols) {
return std::make_shared<ProtocolCompositionTypeRef>(Protocols);
}
TypeRefVector getProtocols() {
return Protocols;
}
ConstTypeRefVector getProtocols() const {
return Protocols;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::ProtocolComposition;
}
};
class MetatypeTypeRef final : public TypeRef {
TypeRefPointer InstanceType;
public:
MetatypeTypeRef(TypeRefPointer InstanceType)
: TypeRef(TypeRefKind::Metatype), InstanceType(InstanceType) {}
static std::shared_ptr<MetatypeTypeRef> create(TypeRefPointer InstanceType) {
return std::make_shared<MetatypeTypeRef>(InstanceType);
}
TypeRefPointer getInstanceType() {
return InstanceType;
}
ConstTypeRefPointer getInstanceType() const {
return InstanceType;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Metatype;
}
};
class ExistentialMetatypeTypeRef final : public TypeRef {
TypeRefPointer InstanceType;
public:
ExistentialMetatypeTypeRef(TypeRefPointer InstanceType)
: TypeRef(TypeRefKind::ExistentialMetatype), InstanceType(InstanceType) {}
static std::shared_ptr<ExistentialMetatypeTypeRef>
create(TypeRefPointer InstanceType) {
return std::make_shared<ExistentialMetatypeTypeRef>(InstanceType);
}
TypeRefPointer getInstanceType() {
return InstanceType;
}
ConstTypeRefPointer getInstanceType() const {
return InstanceType;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::ExistentialMetatype;
}
};
class GenericTypeParameterTypeRef final : public TypeRef {
const uint32_t Index;
const uint32_t Depth;
public:
GenericTypeParameterTypeRef(uint32_t Index, uint32_t Depth)
: TypeRef(TypeRefKind::GenericTypeParameter), Index(Index), Depth(Depth) {}
static std::shared_ptr<GenericTypeParameterTypeRef>
create(uint32_t Index, uint32_t Depth) {
return std::make_shared<GenericTypeParameterTypeRef>(Index, Depth);
}
uint32_t getIndex() const {
return Index;
}
uint32_t getDepth() const {
return Depth;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::GenericTypeParameter;
}
};
class DependentMemberTypeRef final : public TypeRef {
TypeRefPointer Member;
TypeRefPointer Base;
public:
DependentMemberTypeRef(TypeRefPointer Member, TypeRefPointer Base)
: TypeRef(TypeRefKind::DependentMember), Member(Member), Base(Base) {}
static std::shared_ptr<DependentMemberTypeRef>
create(TypeRefPointer Member, TypeRefPointer Base) {
return std::make_shared<DependentMemberTypeRef>(Member, Base);
}
TypeRefPointer getMember() {
return Member;
}
ConstTypeRefPointer getMember() const {
return Member;
}
TypeRefPointer getBase() {
return Base;
}
ConstTypeRefPointer getBase() const {
return Base;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::DependentMember;
}
};
class AssociatedTypeRef final : public TypeRef {
std::string Name;
public:
AssociatedTypeRef(std::string Name)
: TypeRef(TypeRefKind::Associated), Name(Name) {}
static std::shared_ptr<AssociatedTypeRef> create(std::string Name) {
return std::make_shared<AssociatedTypeRef>(Name);
}
std::string getName() const {
return Name;
}
static bool classof(const TypeRef *TR) {
return TR->getKind() == TypeRefKind::Associated;
}
};
template <typename ImplClass, typename RetTy = void, typename... Args>
class TypeRefVisitor {
public:
RetTy visit(const TypeRef *typeRef, Args... args) {
switch (typeRef->getKind()) {
#define TYPEREF(Id, Parent) \
case TypeRefKind::Id: \
return static_cast<ImplClass*>(this) \
->visit##Id##TypeRef(cast<Id##TypeRef>(typeRef), \
::std::forward<Args>(args)...);
#include "swift/Reflection/TypeRefs.def"
}
}
};
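TypeRefVisitor dispatches via CRTP: `visit` switches on the kind tag and casts `this` to the derived visitor to call the per-kind hook generated by the TYPEREF macro. A minimal standalone sketch of that dispatch pattern follows; the names `MiniKind`, `MiniRef`, `MiniVisitor`, and `KindCoder` are illustrative, not part of this header.

```cpp
#include <cassert>

enum class MiniKind { Builtin, Nominal };

struct MiniRef {
    MiniKind Kind;
    explicit MiniRef(MiniKind K) : Kind(K) {}
    MiniKind getKind() const { return Kind; }
};

// CRTP visitor: visit() switches on the kind tag and forwards to the
// derived class's per-kind handler, as TypeRefVisitor does above.
template <typename ImplClass, typename RetTy = void>
struct MiniVisitor {
    RetTy visit(const MiniRef *R) {
        switch (R->getKind()) {
        case MiniKind::Builtin:
            return static_cast<ImplClass *>(this)->visitBuiltin(R);
        case MiniKind::Nominal:
            return static_cast<ImplClass *>(this)->visitNominal(R);
        }
        return RetTy();
    }
};

// A toy visitor returning a distinct code per kind.
struct KindCoder : MiniVisitor<KindCoder, int> {
    int visitBuiltin(const MiniRef *) { return 1; }
    int visitNominal(const MiniRef *) { return 2; }
};
```

Because the dispatch is a compile-time cast plus a switch, adding a new kind to the `.def` file forces every visitor to handle it, with no virtual-call overhead.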
class PrintTypeRef : public TypeRefVisitor<PrintTypeRef, void> {
std::ostream &OS;
unsigned Indent;
std::ostream &indent(unsigned Amount) {
for (unsigned i = 0; i < Amount; ++i)
OS << ' ';
return OS;
}
std::ostream &printHeader(std::string Name) {
indent(Indent) << '(' << Name;
return OS;
}
template<typename T>
std::ostream &printField(std::string name, const T &value) {
if (!name.empty())
OS << " " << name << "=" << value;
else
OS << " " << value;
return OS;
}
void printRec(const TypeRef *typeRef) {
OS << "\n";
if (typeRef == nullptr)
OS << "<<null>>";
else {
Indent += 2;
visit(typeRef);
Indent -= 2;
}
}
public:
PrintTypeRef(std::ostream &OS, unsigned Indent)
: OS(OS), Indent(Indent) {}
void visitBuiltinTypeRef(const BuiltinTypeRef *B) {
printHeader("builtin");
auto demangled = Demangle::demangleTypeAsString(B->getMangledName());
printField("", demangled);
OS << ')';
}
void visitNominalTypeRef(const NominalTypeRef *N) {
printHeader("nominal");
auto demangled = Demangle::demangleTypeAsString(N->getMangledName());
printField("", demangled);
OS << ')';
}
void visitBoundGenericTypeRef(const BoundGenericTypeRef *BG) {
printHeader("bound-generic");
auto demangled = Demangle::demangleTypeAsString(BG->getMangledName());
printField("", demangled);
for (auto param : BG->getGenericParams())
printRec(param.get());
OS << ')';
}
void visitTupleTypeRef(const TupleTypeRef *T) {
printHeader("tuple");
for (auto element : T->getElements())
printRec(element.get());
OS << ')';
}
void visitFunctionTypeRef(const FunctionTypeRef *F) {
printHeader("function");
printRec(F->getInput().get());
printRec(F->getResult().get());
OS << ')';
}
void visitProtocolTypeRef(const ProtocolTypeRef *P) {
printHeader("protocol");
printField("module", P->getModuleName());
printField("name", P->getName());
OS << ')';
}
void visitProtocolCompositionTypeRef(const ProtocolCompositionTypeRef *PC) {
printHeader("protocol-composition");
for (auto protocol : PC->getProtocols())
printRec(protocol.get());
OS << ')';
}
void visitMetatypeTypeRef(const MetatypeTypeRef *M) {
printHeader("metatype");
printRec(M->getInstanceType().get());
OS << ')';
}
void visitExistentialMetatypeTypeRef(const ExistentialMetatypeTypeRef *EM) {
printHeader("existential-metatype");
printRec(EM->getInstanceType().get());
OS << ')';
}
void visitGenericTypeParameterTypeRef(const GenericTypeParameterTypeRef *GTP){
printHeader("generic-type-parameter");
printField("index", GTP->getIndex());
printField("depth", GTP->getDepth());
OS << ')';
}
void visitDependentMemberTypeRef(const DependentMemberTypeRef *DM) {
printHeader("dependent-member");
printRec(DM->getBase().get());
printRec(DM->getMember().get());
OS << ')';
}
void visitAssociatedTypeRef(const AssociatedTypeRef *AT) {
printHeader("associated-type");
printField("name", AT->getName());
OS << ')';
}
};
// Builds a TypeRef tree from a demangle node tree. Marked inline because
// this definition lives in a header.
inline TypeRefPointer decodeDemangleNode(Demangle::NodePointer Node) {
using NodeKind = Demangle::Node::Kind;
switch (Node->getKind()) {
case NodeKind::Type:
return decodeDemangleNode(Node->getChild(0));
case NodeKind::BoundGenericClass:
case NodeKind::BoundGenericEnum:
case NodeKind::BoundGenericStructure: {
auto mangledName = Demangle::mangleNode(Node->getChild(0));
auto genericArgs = Node->getChild(1);
TypeRefVector Params;
for (auto genericArg : *genericArgs)
Params.push_back(decodeDemangleNode(genericArg));
return BoundGenericTypeRef::create(mangledName, Params);
}
case NodeKind::Class:
case NodeKind::Enum:
case NodeKind::Structure: {
auto mangledName = Demangle::mangleNode(Node);
return NominalTypeRef::create(mangledName);
}
case NodeKind::BuiltinTypeName: {
auto mangledName = Demangle::mangleNode(Node);
return BuiltinTypeRef::create(mangledName);
}
case NodeKind::ExistentialMetatype: {
auto instance = decodeDemangleNode(Node->getChild(0));
return ExistentialMetatypeTypeRef::create(instance);
}
case NodeKind::Metatype: {
auto instance = decodeDemangleNode(Node->getChild(0));
return MetatypeTypeRef::create(instance);
}
case NodeKind::Protocol: {
auto moduleName = Node->getChild(0)->getText();
auto name = Node->getChild(1)->getText();
return ProtocolTypeRef::create(moduleName, name);
}
case NodeKind::DependentGenericParamType: {
auto depth = Node->getChild(0)->getIndex();
auto index = Node->getChild(1)->getIndex();
return GenericTypeParameterTypeRef::create(index, depth);
}
case NodeKind::FunctionType: {
auto input = decodeDemangleNode(Node->getChild(0));
auto result = decodeDemangleNode(Node->getChild(1));
return FunctionTypeRef::create(input, result);
}
case NodeKind::ArgumentTuple:
return decodeDemangleNode(Node->getChild(0));
case NodeKind::ReturnType:
return decodeDemangleNode(Node->getChild(0));
case NodeKind::NonVariadicTuple: {
TypeRefVector Elements;
for (auto element : *Node)
Elements.push_back(decodeDemangleNode(element));
return TupleTypeRef::create(Elements);
}
case NodeKind::TupleElement:
return decodeDemangleNode(Node->getChild(0));
case NodeKind::DependentGenericType: {
return decodeDemangleNode(Node->getChild(1));
}
case NodeKind::DependentMemberType: {
auto member = decodeDemangleNode(Node->getChild(0));
auto base = decodeDemangleNode(Node->getChild(1));
return DependentMemberTypeRef::create(member, base);
}
case NodeKind::DependentAssociatedTypeRef:
return AssociatedTypeRef::create(Node->getText());
default:
return nullptr;
}
}
inline void TypeRef::dump() const {
dump(std::cerr);
}
inline void TypeRef::dump(std::ostream &OS, unsigned Indent) const {
PrintTypeRef(OS, Indent).visit(this);
OS << "\n";
}
} // end namespace reflection
} // end namespace swift
#endif // SWIFT_REFLECTION_TYPEREF_H
/* Copyright (C) 2011-2012 Povilas Kanapickas <[email protected]>
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
*/
#ifndef LIBSIMDPP_DETAIL_SHUFFLE_NEON_INT64x2_H
#define LIBSIMDPP_DETAIL_SHUFFLE_NEON_INT64x2_H
#if SIMDPP_USE_NEON
#include <type_traits>
namespace simdpp {
namespace SIMDPP_ARCH_NAMESPACE {
namespace detail {
namespace neon_shuffle_int64x2 {
#if SIMDPP_USE_NEON32
/*
The code below implements generalized permutations of elements within
int64x2 vectors using half-vector move instructions available on NEON.
*/
using T = uint64x2; // full vector
using H = uint64x1_t; // half vector
/// Returns the lower/higher part of a vector. Cost: 0
SIMDPP_INL H lo(T a) { return vget_low_u64(a.native()); }
SIMDPP_INL H hi(T a) { return vget_high_u64(a.native()); }
/// Combines two half vectors. Cost: 0
SIMDPP_INL T co(H lo, H hi) { return vcombine_u64(lo, hi); }
// 2-element permutation
template<unsigned s0, unsigned s1> SIMDPP_INL
T permute2(T a)
{
const unsigned sel = s0*2 + s1;
switch (sel) {
default:
case 0: /*00*/ return co(lo(a), lo(a));
case 1: /*01*/ return a;
case 2: /*10*/ return co(hi(a), lo(a));
case 3: /*11*/ return co(hi(a), hi(a));
}
}
// 2-element shuffle: the first element comes from a, the second from b
template<unsigned s0, unsigned s1> SIMDPP_INL
T shuffle1(T a, T b)
{
const unsigned sel = s0*2 + s1;
switch (sel) {
default:
case 0: /*00*/ return co(lo(a), lo(b));
case 1: /*01*/ return co(lo(a), hi(b));
case 2: /*10*/ return co(hi(a), lo(b));
case 3: /*11*/ return co(hi(a), hi(b));
}
}
template<unsigned s0, unsigned s1> SIMDPP_INL
T shuffle2x2(const T& a, const T& b)
{
const unsigned sel = s0*4 + s1;
switch (sel) {
default:
case 0: /*00*/ return co(lo(a), lo(a));
case 1: /*01*/ return a;
case 2: /*02*/ return co(lo(a), lo(b));
case 3: /*03*/ return co(lo(a), hi(b));
case 4: /*10*/ return co(hi(a), lo(a));
case 5: /*11*/ return co(hi(a), hi(a));
case 6: /*12*/ return co(hi(a), lo(b));
case 7: /*13*/ return co(hi(a), hi(b));
case 8: /*20*/ return co(lo(b), lo(a));
case 9: /*21*/ return co(lo(b), hi(a));
case 10: /*22*/ return co(lo(b), lo(b));
case 11: /*23*/ return b;
case 12: /*30*/ return co(hi(b), lo(a));
case 13: /*31*/ return co(hi(b), hi(a));
case 14: /*32*/ return co(hi(b), lo(b));
case 15: /*33*/ return co(hi(b), hi(b));
}
}
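The selector arithmetic above (`sel = s0*2 + s1`, or `s0*4 + s1` for shuffle2x2) can be checked without NEON hardware against a portable scalar model. The sketch below mirrors the semantics of `permute2` and `shuffle1`; `Vec2` and the `_model` function names are illustrative stand-ins for `uint64x2` and the library routines, not library API.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Scalar stand-in for a two-lane uint64 vector.
using Vec2 = std::array<uint64_t, 2>;

// Model of permute2: result element i is input element s_i.
template <unsigned s0, unsigned s1>
Vec2 permute2_model(Vec2 a) {
    static_assert(s0 < 2 && s1 < 2, "selectors must be 0 or 1");
    return Vec2{a[s0], a[s1]};
}

// Model of shuffle1: the first element always comes from a, the
// second always from b.
template <unsigned s0, unsigned s1>
Vec2 shuffle1_model(Vec2 a, Vec2 b) {
    static_assert(s0 < 2 && s1 < 2, "selectors must be 0 or 1");
    return Vec2{a[s0], b[s1]};
}
```

For example, `permute2_model<1, 0>` swaps the two lanes, matching the `co(hi(a), lo(a))` case (`sel == 2`) in the NEON32 implementation.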
#else // SIMDPP_USE_NEON64
using T = uint64x2; // full vector
// Moves the high half of b onto high half of a
SIMDPP_INL T move_hi(const T& a, const T& b)
{
T mask = make_uint(0xffffffffffffffff, 0x0);
return vbslq_u64(mask.native(), a.native(), b.native());
}
// 2-element permutation
template<unsigned s0, unsigned s1> SIMDPP_INL
T permute2(const T& a)
{
const unsigned sel = s0*2 + s1;
switch (sel) {
default:
case 0: /*00*/ return vzip1q_u64(a.native(), a.native());
case 1: /*01*/ return a;
case 2: /*10*/ return vextq_u64(a.native(), a.native(), 1);
case 3: /*11*/ return vzip2q_u64(a.native(), a.native());
}
}
// 2-element shuffle: the first element comes from a, the second from b
template<unsigned s0, unsigned s1> SIMDPP_INL
T shuffle1(const T& a, const T& b)
{
const unsigned sel = s0*2 + s1;
switch (sel) {
default:
case 0: /*00*/ return vzip1q_u64(a.native(), b.native());
case 1: /*01*/ return move_hi(a, b);
case 2: /*10*/ return vextq_u64(a.native(), b.native(), 1);
case 3: /*11*/ return vzip2q_u64(a.native(), b.native());
}
}
template<unsigned s0, unsigned s1> SIMDPP_INL
T shuffle2x2(const T& a, const T& b)
{
const unsigned sel = s0*4 + s1;
switch (sel) {
default:
case 0: /*00*/ return vzip1q_u64(a.native(), a.native());
case 1: /*01*/ return a;
case 2: /*02*/ return vzip1q_u64(a.native(), b.native());
case 3: /*03*/ return move_hi(a, b);
case 4: /*10*/ return vextq_u64(a.native(), a.native(), 1);
case 5: /*11*/ return vzip2q_u64(a.native(), a.native());
case 6: /*12*/ return vextq_u64(a.native(), b.native(), 1);
case 7: /*13*/ return vzip2q_u64(a.native(), b.native());
case 8: /*20*/ return vzip1q_u64(b.native(), a.native());
case 9: /*21*/ return move_hi(b, a);
case 10: /*22*/ return vzip1q_u64(b.native(), b.native());
case 11: /*23*/ return b;
case 12: /*30*/ return vextq_u64(b.native(), a.native(), 1);
case 13: /*31*/ return vzip2q_u64(b.native(), a.native());
case 14: /*32*/ return vextq_u64(b.native(), b.native(), 1);
case 15: /*33*/ return vzip2q_u64(b.native(), b.native());
}
}
#endif
} // namespace neon_shuffle_int64x2
} // namespace detail
} // namespace SIMDPP_ARCH_NAMESPACE
} // namespace simdpp
#endif
#endif
The Value of Aging Trees
A big red oak on top of Sapsucker Ridge
A giant eastern hemlock tree in Central Pennsylvania
Photographing downy rattlesnake plantain in our deer exclosure
April Journal Highlights (2)
Close encounters of the avian kind
April 18. The sun warmed the Far Field, and as I walked Pennyroyal Trail, a towhee sang, a flicker called, and a ruby-crowned kinglet sang. I stopped to “pish,” hoping to entice the kinglet into view, and I did. He flew on to a tree branch, erected his ruby-crown, and sang, giving me my first look at what I had been hearing for weeks.
I went on to the woods beyond the Far Field where a brown-headed cowbird sang and a ruffed grouse crept off into the underbrush. I imagine he was the drummer I stalked back in early April. Sitting still on a moss-covered, old log, I also heard a red-bellied woodpecker, eastern towhee, and northern flicker as the dead leaves rustled in the wind.
The sun quickly disappeared, and I picked my way through the woods until I encountered two excited white-breasted nuthatches on a tree trunk. At first I thought they were courting, but then I realized that they were drinking from sap wells. They were quickly driven off by a male yellow-bellied sapsucker.
As soon as he disappeared higher in the tree, the female nuthatch returned for a few furtive sips. Still, the sapsucker quietly worked on new wells, sipped from old ones, and chased off a ruby-crowned kinglet. Occasionally the male sapsucker flicked his wings as he worked or flew over to an adjacent grapevine as if to rest. Surely there is no tasty sap in a grapevine. The irresistible sap wells are on a pignut hickory, as usual, and its trunk is encircled with old sap wells.
The nuthatches returned, calling softly, as they drank from the lower sap wells while the sapsucker worked high in the tree drilling new ones. At last I left the relatively peaceful scene, two species sharing one resource.
April 20. I used my turkey call as I sat in the spruce grove and called in a hen turkey. She came close to my hiding place at the edge of the grove and then retreated back to the edge of the woods along First Field Trail, clucking all the way. I’ve never called in a hen before, but according to one of our turkey hunters, that’s not unusual. Still, experts disagree on why they respond to a hen call. Is she already setting on eggs and defending her territory? Is she a scout for a male turkey or trying to keep rivals from joining “her” gobbler? Is she recruiting more hens for “her” gobbler? Is she merely curious? Are there reasons that we can’t even imagine?
Then, walking back on the Far Field Road, I scared up a gobbler. He, of course, saw me and ran, but I did get a quick look at his long beard. Was he still searching for hens? If only I had tried the hen call along the road. Oh well! It’s obvious that the turkeys are restless and have perhaps not gotten together yet due to the cold.
Above the barn on Butterfly Loop at dusk, the woodcock called, turning around to direct his call in all directions as we watched from a respectable distance.
Gray squirrels and masked shrews: social behavior
April 21. At least three young gray squirrels were born in the black walnut tree nest hole beside the driveway. Today they emerged for the first time, or at least two of the three did. I sat watching on the veranda as first one emerged and stayed out, exploring nearby branches. Then the second emerged more briefly and stayed closer to the nest hole before going back into it again. Each squirrel chewed about the hole entrance, hanging upside down before emerging. When both squirrels were out, a third one peered timidly out of the hole, but stayed inside. All their climbing about, peering in and out of the hole, even their chewing was silent. But scolding from a distant adult squirrel sent them all back into the den hole with one looking out. Three adults harvested black walnuts on the lower lawn.
The first six-spotted tiger beetle gleamed bright green on the driveway.
April 23. The gray squirrel family, even the shy one, played in, out, and around their nest hole as we watched from the veranda.
April 24. I heard a black-throated green warbler in the woods near the powerline right-of-way singing both his songs. As I stood listening and watching, a masked shrew dashed in and out of the leaf duff along an old, barkless, fallen tree. I sat quietly, watching for the shrews, and heard the first blue-gray gnatcatcher of the season. As I continued on the trail, a pair of mallards flew past on the powerline right-of-way, heading toward the First Field. Were they the same mallards Dave saw earlier in the morning? Had they gone back to Sinking Valley? Who knows? But at least I saw them.
More masked shrews chased in the woods on the other side of the powerline right-of-way. They crossed right in front of me for several minutes so I sat down on the trail and watched as they dashed back and forth across the trail, always using the same pathway at my feet. They were tiny, grayish-brown, with peculiarly-shaped snouts that identified them as masked shrews. I counted half a dozen or more chasing about. They were silent to my ears except for the rustling in the leaves. The books say that they are looking for food, but I only see this phenomenon in April and sometimes in July and I think it has to do with mate-chasing. None of the books say anything about their sex life. I suspect they have two broods a year, but I can’t prove it. Finally they stopped and I continued my walk.
The return of the wood thrush
April 27. Sitting on the veranda reading near dusk, we heard the first whip-poor-will of the season singing above the garage.
April 28. A pair of northern flickers checked out the black walnut tree squirrel den. Were they waiting until the young gray squirrels leave so they could take over the nest hole?
April 29. I stepped outside early to listen for the wood thrush, but the towhees were so loud they blocked out more distant sounds. Still, I did hear a faint portion of a wood thrush song. I stopped and gave thanks that another spring had come and with it wood thrush music–three months of heavenly singing before they once again leave us.
On Dogwood Knoll a rose-breasted grosbeak sang. And then, as I descended the knoll on a path of blooming dwarf cinquefoil, I heard the singing of a Louisiana waterthrush above the dark place. Hallelujah! We have at least one singing male. I sat on Turkey Bench to listen to his ringing tones.
Down near the bottom of the mountain I heard the “tick-tick” scolding tones of another Louisiana waterthrush. I rested on a moss-covered log beside the stream, still hearing but not seeing the waterthrush.
Bruce came down the road and a small, black, white and orange moth spun around his hat and landed briefly on it. Then it landed on my hat and Bruce photographed it. It was a grapevine epimenis, Psychomorpha epimenis — an early, day-flying moth whose caterpillar feeds on grapes. Truly a beautiful little creature.
In mid-afternoon, Steve pointed out a black vulture sailing over First Field.
April 30. At breakfast I watched a northern flicker throwing to the wind the remains of the squirrel nest in the walnut tree. Those flickers had been checking on the den every day, evidently waiting until the squirrel family dispersed.
Walking up Guesthouse Trail, I finally heard the wood thrush singing clearly. Wild black cherry and striped maple trees have leaved out and already my view into the woods has diminished.
See also my post at the Plummer’s Hollow blog, Spring wildflowers: back on track.
Shrew Business
In the gray, gathering gloom of an imminent February snowstorm, I stopped to watch a northern short-tailed shrew foraging on the edge of our powerline right-of-way. On this day it was a breezy 22 degrees Fahrenheit and patches of bare earth alternated with patches of frozen snow.
The shrew had scuttled past a mere five feet away. Then it paused and used its long, mobile, cartilaginous snout to poke in leaf litter and dried grasses in search of food. Next it pushed its snout under a snowy patch for ten minutes and busily ate whatever it had found.
That was when I slowly eased myself down on my “hot seat” to watch it. The shrew was too close to focus my binoculars and remained oblivious to my presence. It pursued its prey vigorously, its pointed snout questing, its clawed back feet pumping, its front feet digging like a frantic terrier. Once it pulled what looked like a caterpillar from beneath the leaf litter and chomped it down.
After almost 45 minutes of high-octane hunting and eating, the shrew ran under a log at the edge of the woods. Probably it was returning to its apple-sized resting nest. Constructed of grasses, sedges, and leaves in the shape of a hollow ball, the nest is located as much as six to 16 inches below ground or beneath logs, stumps or old boards. From the nest, openings lead to a complex underground burrow system that includes separate food-caching locations and latrine areas.
Mostly northern short-tailed shrews (Blarina brevicauda) sleep in the winter to reduce their need for food. But such periods are alternated with intense, active hunting periods that usually occur below the snow cover where it is warmer. Researchers claim that northern short-tailed shrews spend only brief periods above ground during cold weather, but this one, at least, was undeterred by the damp cold.
On a much warmer day in late May I heard a rustle in the dry leaves beside First Field Trail. Again it was a northern short-tailed shrew. This time it ranged over the forest floor, in and out of the leaf litter, along the sides of fallen logs, and atop beds of mosses as it looked for food. Once it tried to grab a centipede but missed. It also lunged twice at a small toad. Finally, it disappeared underground.
I have been enamored with these small, fierce creatures ever since I discovered my first northern short-tailed shrew dashing frenetically around the bottom of an old bucket I had set in our basement sink one winter almost three decades ago. I knew it was not a mouse but just what it was puzzled me. After a little research, I identified it and discovered to my surprise that it was only one of seven species of shrews living in Pennsylvania and 312 species worldwide! Found on every continent but Antarctica and Australia, shrews make up 25% or more of the species richness in northcentral and northeastern North America, especially in wet sites.
Shrews belong to the family Soricidae and first evolved soon after dinosaurs disappeared 38 million years ago. Since then they have remained almost unchanged. Most have five clawed toes on each foot, a long, pointed snout that extends beyond their jaw, a wedge-shaped skull, and sharp, pointed teeth. Those in the eastern United States possess minute eyes and a highly-developed sense of smell and hearing.
Of the seven Pennsylvania species, five belong to the Sorex genus (Sorex being the Latin word for “shrew”) and, except for the larger water shrew (S. palustris), are difficult to distinguish in the field.
Water shrews are primarily denizens of rocky-bottomed, rushing mountain streams in forests of hemlock, spruce, and rhododendron although they also have been found in bogs, dry creek beds, and near small springs. At 5.6 to 6.2 inches in length, they are the longest of Pennsylvania’s shrews. However, they are outweighed by northern short-tailed shrews, the largest shrew species in the state. Water shrews are further distinguished by their long, bicolored tails and bodies that are gray above and white below.
Their large, broad hind feet are equipped with stiff hairs on the sides of their toes that hold globules of air. This allows them to perform the seemingly miraculous feats of walking and running on the surface of water. They are adept swimmers and divers even under ice in the winter as they pursue their small aquatic prey. These are shrews I would love to see in action, but they are rare enough to be listed as a threatened species in Pennsylvania.
One Sorex species I have positively identified on our property is the smoky shrew (S. fumeus). I found one dead in the woods, popped it in our freezer, and later took it to Dr. Joseph T. Merritt, shrew expert and Resident Director of Powdermill Nature Reserve, the biological field station of Pittsburgh’s Carnegie Museum of Natural History. (For more information on Merritt’s work with northern short-tailed shrews, see my February 1997 column.) Merritt was quickly able to distinguish my smoky shrew from other Sorex species because it is larger (4.3 to 5 inches) than all but the long-tailed shrew.
Also called “gray shrews,” they like damp, dark woods with dense ferns and other ground cover shaded by a canopy of second-growth timber–the kind of environment we have here–and they eat small invertebrates of the leaf litter. Smoky shrews live with or near deer and woodland jumping mice, red-backed and pine voles, hairy-tailed moles and northern short-tailed shrews, and they use burrows of other small woodland animals. When disturbed they are liable to throw themselves on their backs and wave their legs while emitting grating, high-pitched calls.
Long-tailed shrews (S.dispar) resemble smoky shrews except for their longer tails, slimmer bodies, and darker coloration. But they prefer to live under and among rocks or boulders, especially talus slopes, hence their other name “rock shrews.” In Pennsylvania their favorite food is centipedes.
Then there are the two smallest, look-alike Sorex species–the masked shrews and pygmy shrews. They can only be told apart by examining their teeth, even though the pygmy shrews (S. hoyi) have the honor of being the smallest mammals in North America. Although they prefer dry mountain habitats, they can also be found in open fields and along the edges of woods. They make their tiny burrows below stumps, fallen logs, and forest leaf carpets and eat twice their weight (that of a dime) in insect larvae, spiders and beetles every day. Unlike most shrew species that produce two to three litters a year, they have only one.
I’m almost certain we have masked shrews (S. cinereus) here and so is Merritt because of the behavior I have observed. Over the years I have often been stopped by rustling and twittering sounds in the leaf litter. Tiny bodies dash in and out of the leaf cover and under and along logs. Sometimes they are so intent on what they are doing that I can sit on a log while they run under my legs. It is difficult to get a good look at them and an accurate count of their numbers, but I’ve seen up to five at a time. Masked shrews are most famous for this behavior, yet so far no researcher has positively figured out why they do it. Some think it is connected with courtship (they do have three litters of four to ten young a year) and others believe it is connected with food gathering. C.R. Vispo observed running masked shrews in a mountain forest in western North Carolina and found that their stomachs were stuffed with fly larvae. Another researcher mentioned that the larvae of some flies may travel in long, snake-like masses over the forest floor. Shrews, with their keen noses, would easily detect such a phenomenon. On the other hand, the running I have observed has occurred only in the spring and summer when masked shrews are courting, mating, and raising young.
Masked shrews are the most widely distributed shrews in North America, living in Alaska, across Canada, and south into the northern half of the United States. They need a shaded, moist habitat and eat tiny mollusks, insects and their larvae, small worms, and the carrion of larger animals.
Finally, there are the endangered-in-Pennsylvania least shrews (Cryptotis parva) that look like smaller versions of northern short-tailed shrews. Also called “bee shrews” because they enter beehives in search of larvae and pupae, their major foods, they like drier habitats than the other shrews. A grassland species, they live in grassy, weedy and brushy fields and forage in the runways of meadow voles. Least shrews breed from March to November and are sometimes found together in nests–as many as 12 in Texas nests and 31 in Virginia. Although most shrew species are solitary creatures except during courtship and breeding, researchers believe that both least shrew parents care for their offspring.
Shrews in general communicate mostly through scent and vocalizations and the masked, northern short-tailed and water shrews use echolocation especially when they are exposed to strange situations. Researchers think that echolocation may be a way for them to explore a new habitat without attracting predators. Of those they have plenty–owls, house cats, hawks, opossums, raccoons, snakes, foxes, weasels, bobcats, and herons, to name a few. Because they possess high metabolisms, they must eat almost constantly and most small shrews die of old age after a year. Only northern short-tailed shrews live longer–2.5 years.
Much more needs to be learned about these tiny, voracious, pugnacious creatures. Over the last couple decades, shrew research has expanded. As the t-shirt of Pennsylvania wildlife biologist Jim Hart puts it, “There’s no business like shrew business!”
Winter Survival Champions
“It’s amazing. I can’t believe there’s anything here,” exclaimed Dr. Joseph Merritt.
Resident Director of Powdermill Nature Reserve, the biological field station of Pittsburgh’s Carnegie Museum of Natural History and author of Guide to the Mammals of Pennsylvania, Merritt is a specialist in small mammals. To learn more about the lives of such creatures as deer and white-footed mice, woodland jumping mice, southern red-backed voles, southern flying squirrels, eastern chipmunks and short-tailed shrews, Merritt has been live trapping them for over sixteen years. On a cold afternoon in February, with two inches of fresh snow on the ground, my husband Bruce and I were accompanying him on his rounds.
The January flood of 1996 had occurred less than a month before our visit. A portion of Merritt’s trapping site along Powdermill Run had been under water, and the traps and their protective chimneys had washed away. Merritt had quickly replaced them and, to his amazement, was finding new animals, specifically short-tailed shrews, in that area. Somehow they had learned that the previous residents were gone (presumably drowned) and were already claiming new territory in the winter. Of the eight short-tailed shrews he trapped that afternoon and the following morning, four were new animals in the flood zone and four were re-captures from higher ground. He also found two flying squirrels and two red-backed voles in traps further up the slope, the latter among large rocks that we clambered over and around.
Merritt sets two metal Sherman box traps, protected from the elements by a wooden chimney, ten meters (about 33 feet) apart in his one-hectare (about two and a half acres) wooded study site, and has a total of 200 trapping sites in all. Baited with sunflower seeds and padded with synthetic-fiber nesting material, the traps are monitored in early morning and late afternoon three days of one week, every month of the year. The short-tailed shrews are toe-clipped because of their tiny ears; the other mammals are ear-tagged for identification so Merritt can distinguish each captive. He also weighs and sexes them and checks their reproductive status.
In the summer he may find as many as 100 traps occupied; in the winter the number is much smaller. That’s because the woodland jumping mice are hibernating, having reduced their body temperature from a normal 98 degrees to 33. Many of the others, such as chipmunks, flying squirrels, and mice, undergo temporary torpor, reducing their temperature to 60 degrees and then periodically arousing to eat stored food. Southern red-backed voles and short-tailed shrews, however, are active all winter long.
Although Merritt studies small mammals year round, he is particularly interested in their winter survival techniques. Years ago, as a graduate student at the University of Colorado in Boulder, he had monitored the individual development and population changes of wintering southern red-backed voles at 11,000 feet in the spruce-fir forests of the Rocky Mountains. Eight days a month he snowshoed into his site and found each trap by digging through snow beneath markers in the trees. He was still snowshoeing in and brushing snow from his traps in early June when he made an electrifying discovery–a two-week-old vole born under the snow two months before the snow would be gone. In other words, southern red-backed voles not only survived the winter successfully under a heavy snow pack but also reproduced there.
This discovery solidified Merritt’s interest in winter ecology, a discipline pioneered by Russian scientists in the 1930s and 40s. But instead of staying in Colorado, Merritt, a California native, came to Powdermill.
“Once I saw the Appalachian forests, I was hooked,” Merritt told us. Even after sixteen years, he continues to be delighted by the beauty and diversity of eastern woodlands.
His study site at Powdermill is in a mature second growth forest consisting primarily of American beech, yellow poplar or tulip, sugar maple, cucumber magnolia and red oak trees with an understory of striped maple trees, spicebush, witch hazel, and rhododendron. Selectively logged in the early 1900s, the site now has many large trees and a small mammal population of approximately 250.
At Powdermill Merritt is able to study the winter survival tactics of the same southern red-backed vole (Clethrionomys gapperi) that he studied in Colorado, because it ranges in the west along the Rocky Mountains to New Mexico and Arizona, across Canada and the northern United States, and in the east along the Appalachian Mountains south to northern Georgia. This beautiful little creature is easily identifiable by a broad, reddish band running from its forehead to its rump and its bicolored tail, dark brown above and whitish below. It lives in coniferous, deciduous and mixed forests with an abundance of rotting logs, stumps and exposed roots and eats nuts, seeds, berries, mosses, lichens, ferns, mushrooms, plants and arthropods.
After twelve years of live trapping them at Powdermill Merritt determined that they ranged in density from five to 36 voles a hectare. Their winter survival techniques include: reducing their body size in the autumn and winter so they need less food; shifting their food preferences to readily available seeds, roots, bark, and plant parts; foraging under the snow and in subterranean burrows where they are not affected by snow or bad weather and where it is warmer; and engaging in non-shivering thermogenesis (heat production) or NST.
NST is an important winter survival technique not only for red-backed voles but for short-tailed shrews as well. Between their shoulder blades near their spinal cords, they have high-energy, heat-producing tissue called brown adipose tissue, or brown fat, which functions like a blanket to keep them warm. Merritt found that it was especially effective during mid-January, the coldest days of the year at Powdermill.
Despite all these techniques, however, red-backed voles lose weight throughout the winter. In contrast, short-tailed shrews actually gain weight. Merritt calls them the “champions at winter survival.”
Northern short-tailed shrews (Blarina brevicauda) are the largest of North American shrews and one of our most abundant mammals. They live in a wide variety of environments–forests, fields, thickets and grasslands–wherever there is a thick layer of leaf litter and humus where they can construct their intricate underground burrow system. There they hunt invertebrates such as spiders, centipedes, slugs, snails and earthworms as well as salamanders, mice, voles, mushrooms, and plant material. Often, when sitting under a tree in our woods, I can hear the high-pitched squeaking sounds they make. They also use ultrasonic sounds to echolocate objects in their dark burrows.
Short-tailed shrews are smelly creatures, emitting a musky odor produced by oily skin glands on their sides and bellies, which is why cats will kill but not eat them. They are also one of only a few poisonous mammals in the world. Their toxin, similar to a cobra's, immobilizes but does not kill their prey. It slows down the victim's heartbeat, blood pressure, and respiration, making it comatose so it can be stored for three to five days and provide fresh food whenever it is needed. To make sure no other animal eats their prey, they mark it by urinating and defecating on it and cache it in abandoned mice nests. This food caching ability is one of their winter survival techniques.
Until Merritt conducted radiotelemetry studies on short-tailed shrews, they were thought to use huddling as another winter survival technique. Flying squirrels, mice and voles all form group-huddles in a communal nest which help to keep them warm. Not so the belligerent, highly territorial, and individualistic short-tailed shrews. They live alone year round. But the underground nests they build are so well insulated that they are considerably warmer than the above ground temperature. The soil-leaf litter zone where they forage primarily for insects and insect larvae in the winter can be more than 50 degrees warmer than the outside temperature in mid-January and as much as 59 degrees warmer within their tunnels. They also reduce their activity during the coldest periods, staying active only seven to 16 percent of the day. The rest of the time they sleep.
As we accompanied Merritt on his rounds we were impressed by the dedication that drives him on, even in a light snowfall the following morning, even after a couple tumbles on the rocky mountainside, setting and checking hundreds of traps day after day, year in and year out. He admitted that in the milder months he often has help, but in the winter he is usually on his own. He works quickly and efficiently, handling the creatures as little as possible. No matter what the weather, if the traps are set, he checks them, even during the flood when the bridge spanning Powdermill Run was under water.
“I don’t want them to die,” he explained. Unlike some small mammal biologists, he does not kill any of his study animals. In fact, during an earlier visit to his site, in late October, I was struck by the affection he seemed to have for all the small mammals he showed us. He was especially concerned for the wellbeing of his short-tailed shrews.
“Shrews make life real difficult,” he told us. “They’re so temperamental they die if it thunders.” They also chitter loudly the first time they are captured, even before he toe-clips them, something he does not like to do. But it is the only way to get the kind of information on population dynamics that he needs. And after their first capture, the repeats do not protest at all. Many, in fact, are seemingly happy to eat the sunflower seeds in return for a brief minute or two of handling.
During our winter visit, Merritt added to his data. He learned that short-tailed shrews move into a desirable, deserted territory fast, even in winter. One of the newcomers was a big animal at 21.5 grams; two others weighed 14.5 grams. Although short-tailed shrews cannot be sexed unless they are nursing, he could sex the red-backed voles. One was a male probably born last summer. Another was an unusually big female–30 grams. And one of the flying squirrels, who had never been caught in that particular box before, was a female with a swollen vulva, an indication that she was beginning her reproductive cycle. Like all the flying squirrels he captures, summer and winter, new captures and repeaters, she screamed like a banshee until he released her. Then she flew to one of the 30 flying squirrel boxes hammered onto the trees in the study site.
“I never get tired of seeing them fly,” Merritt commented. Despite what seems, at times, to be repetitive, difficult work, Merritt gets more joy from his work than most people, I suspect. To interact with such a charming cast of characters as the diverse small mammals of an Appalachian forest has its rewards, and I felt privileged to have had a close-up view of creatures I have often observed from afar in our own Appalachian forest.
|
is unknowable. Check. Omnipresent, omnipotent, and omniscient. Check. Most religions have something like this at their root. There is a reason for that. Look, something that could take in the whole of space-time at a glance does not exist in any way, shape, or form that we could understand. It does not think like us, and it does not experience emotion or life the way we do.
It is, by its very nature, completely foreign to us. It doesn’t have a gender as we understand it because it is not a species of creature that we could identify. In all seriousness, if it deigned to manifest itself in such a small range of dimensions as to even be perceivable to a human being, it could probably appear as a giant yellow lollipop if it wanted to. IF it wants ANYTHING.
Trying to understand what God is draws a close comparison to an amoeba attempting to classify us based on its life experiences. We are quite literally incomprehensible to that amoeba. It can’t even imagine that we exist.
We, as a species, need to put religion away as a topic of conversation. Yes, there is almost definitely something out there that is so much more vast and powerful than we are that we would call it a god. Get over it. We can’t understand it. Trying is futile. Move on.
We as an organized species need to put away our anger and begin working toward the common good. No nations, no religions, no ethnicity, no gender, just Homo sapiens. We as a species need to begin working on medical nanotechnology, gravitational manipulation, and wireless transmission of energy, because each of these things is possible, and if done without greed and made affordable, these technologies could absolutely change everything. We need to allow our accomplishments and contributions to speak for what we each hold dear.
We need to look on each other as brothers and sisters, because every living thing on this planet is made of the same genetic foundation. We need to begin building toward galactic colonization because we, as a species, are destroying the planet we live on, and when some religious nutbar or political extremist drops the first bomb, we’re all out of luck.
We won’t have the technology to save the planet, and we won’t have the technology to leave it. We will have doomed ourselves to a slow and agonizing extinction while our planet dies.
Where I come from they call that “Up Shit Creek.”
|
This book was the chosen book for the Celebration of Reading campaign, 2012
A Dog Named Worthless:
A Hero Is Born Children's Book
for K-6th Grade Humane Education
A Dog Named Worthless: A Hero Is Born
Written and illustrated by Rocky Shepheard
Welcome to Dogs Deserve Better's first Fantasy Action/Adventure Fiction picture book for kids and adults alike! Disney-esque in tone, the book is beautifully illustrated and written by Rocky Shepheard, a long-time supporter and advocate for chained dogs. The book is full color, hardback for better protection and sturdiness, and comes to life on 32 pages.
Worthless is a chained dog who has never lived inside the house. He suffers through cold winters and hot summers only with the help of his two friends—Otto and Sly Fox. His friends plot to free him and they set off on an adventure to look for a new life somewhere where there are no chains.
They search for days in snow and ice until they find a place on the edge of a pond to hunker down for the winter.
But fate intervenes and presents an opportunity for Worthless to finally prove to himself that he is worthy of love and a good home. Will Worthless have the courage to face his fears and become the dog he has always wanted to be?
About Worthless: Worthless was the name of a real dog, he was the reason that Dogs Deserve Better was founded and that's why the book was named after him. Someone actually named their dog Worthless, can you believe that? Read about his rescue here. It was fitting that the winning contest model for the book was another black lab named Maggie, beloved companion of Joe Maringo of SPARRO.
Dan Piraro - Internationally acclaimed cartoonist and creator of Bizarro:
"'A New Name for Worthless: A Hero is Born' is, like its title character, anything but worthless. This is an exciting story with a full range of emotions that kids will love and adults will pause to think about.
The same story that is lovingly illustrated within plays itself out in communities all over the world and the lessons learned from this story are simple but so important. This book will lead readers of all ages to more fully understand the true nature of "man's best friend" with a common-sense, compassionate approach that can change the world for the better."
– Dan Piraro, creator of "Bizarro"
Cia Bruno, Esq. - New York Animal Rights Advocate:
"A masterfully captivating illustrative theme! In A New Name for Worthless: A Hero is Born, the author skillfully introduces several elements of conflict and resolution that are rich in
opportunity for mutual exploration between parent and child.
The dominant message being that 'all' sentient creatures are worthy of our respect for their needs and existence."
— Cia Bruno, Attorney at law and advocate for all sentient creatures at www.meaningfuladvocacy.com
Lorraine Chittock - Photographer and Author:
"Wow. I love it. Finally a book appears which addresses a pressing dog issue, but doesn't come across as preachy or pandering. From the first page to the last, the reader is drawn into the troubles of Worthless, and captivated by a wonderful story juxtaposed with exquisite paintings. For children this book is a must. For adults, it signifies hope and the progress being made for the lives of chained dogs all over the world."
Travel books exploring our unique bond with animals
DDB Founder Tamira Thayne:
"When Rocky presented his idea, I was instantly intrigued by a story that is less about the reality of everyday chaining, and more a fun fiction tale featuring animal friends and foes. I was delighted with his storyline, and even more enamored with his illustrations. I hope that A New Name for Worthless: A Hero is Born ends up on every dog lover's bookshelf, because that's where it deserves to be!"
– Tamira C. Thayne, founder and CEO of Dogs Deserve Better www.dogsdeservebetter.org
Robin Helfritch Co-founder of Open The Cages Alliance:
"Rocky Shepheard's inspiring and delightful book, “A New Name for Worthless: A Hero is Born” will warm your heart and leave you smiling. Beautifully illustrated, it tells the tale of “Worthless,” a neglected dog who is callously left by his uncaring human “owner” to live his life chained to a rickety wooden box in extreme temperatures. This is, unfortunately, the fate for far too many dogs. Luckily for Worthless, he has good friends, Sly Fox and Otto the Otter. Through teamwork, bravery and a little luck, Worthless's life takes a turn for the better!
This book teaches the importance of compassion, friendship, collaboration, acceptance, and the fact that all deserve freedom from oppression. It is perfect for children and adults of all ages, and the life lessons learned in this book will resonate within all who read it."
– Robin Helfritch
Co-founder, Open the Cages Alliance
Catherine Hedges founder of Dont Bully My Breed, Inc.:
"A New Name for Worthless: A Hero is Born is a wonderful combination of fantasy and reality, and a reminder that no dog is born "Worthless" but any dog, in the hands of the wrong person, can be treated as worthless. The book shows children everyone has worth and is a great lesson in self-esteem, as well as emphasizing the importance of loyalty and friendship and what can be accomplished when friends work together. It shows the importance of compassion and love and should inspire kids to ask questions about animal cruelty and neglect. It is evident in Shepheard's heartwarming story and stunning illustrations that he is a person with a great heart, and one hopes that his message will impact every adult or child who reads it."
– Catherine Hedges
Founder of Dont Bully My Breed, Inc.
Nikki Brown - The UK DOG WHISPERER from Canine Angel:
"'A New Name for Worthless: A Hero is Born' is just such a delightful book with a really simple yet powerful message about how these creatures that we humans call “Dogs”, can teach us all the important lessons of forgiveness, love, devotion, courage, bravery, living in the moment and never holding a grudge.
The story is about how this dog views his world after being chained up to a dog house all his life, suffering all weather conditions and being given the name “Worthless” by his human owner, reminding us all that neglect and cruelty still exist in today’s world.
The beautiful illustrations help to ignite your imagination and take you into this dog’s world, where all he ever wants is the opportunity to show he can be the dog that he was born to be, and to show that even though he has been mistreated he can still find it in his heart and soul to forgive, love unconditionally and become man’s best friend.
A great read for kids and adults, and every dog owner or potential dog owner needs to have this in their book collection."
– Nikki Brown – The UK DOG WHISPERER – Canine Angel
Leigh-Chantelle - Australian Vegan Activist/Singer-Songwriter:
"Worthless the Dog is much more than his namesake. His friends Sly the fox and Otto the sea otter unite to free him from his unbearable life chained to a rickety wooden box, all Worthless has known as a home.
Beautifully illustrated by Rocky Shepheard and written in the first person from Worthless’ point of view, A New Name for Worthless: A Hero is Born is a wonderful story of courage, friendship and bravery.
Families can read this wonderful book together and learn lessons in patience, humility, forgiveness and respect, all taught by Worthless the dog.
This is a great resource book for Humane Education released by the not-for-profit organisation Dogs Deserve Better, who believe in respecting and freeing all chained animals and enhancing their lives."
Founder of Green Earth Day, www.greenearthday.net
Viva La Vegan www.vivalavegan.net
Performing Artist www.leigh-chantelle.com
Veterinarian / Activist Dr. Armaiti May:
"What an inspiration! Engaging as it is educational, this children’s book brings to light the little-known problem of dogs being neglected and left to languish on the end of a chain rather than with their family inside the home.
This dog’s story of hardship, collaboration with his fellow animal friends, and ultimate freedom from unfair confinement warms the heart. It is beautifully told and fills the reader with empathy for the dog’s predicament as well as admiration for his determination to free himself and find a more loving home."
– Armaiti May, DVM
Using age-appropriate messaging, this book targets the following Humane Education aspects for coursework and can be used to stimulate discussion about these issues with students and children:
• Dog Chaining
• Importance of friends
• Working as a team
• Wildlife education
The book follows the success of Puddles On The Floor, by Lorena Estep and illustrated by Tamira C. Thayne, and is only the second children's book created specifically for Dogs Deserve Better. It can be bought in a package with Puddles at a discounted price; see below for package deals.
The book is perfect for family fun reading, and for humane education from kindergarten through 6th grade. If you're a nonprofit who would like to buy wholesale for fundraising purposes, please call us at 757-357-9292 for pricing options. If you'd like DDB to come to your school or group for a reading/visit, please call 757-357-9292 or e-mail [email protected].
Now Take Phone Orders at 1.877.636.1408
or mail your order to 1915 Moonlight Rd., Smithfield, VA 23430.
A Dog Named Worthless: A Hero Is Born Book
Written and Illustrated by Rocky Shepheard
Copy Pak of
A Dog Named Worthless: A Hero Is Born Book
Perfect for Gift-Giving! One Stop Shopping for all the kids in the family.
Hero Kids' Pak
Includes: A New Name for Worthless, Puddles on the Floor, and Happy Dog! Coloring Book. The Hero Kids' Pak is the perfect classroom or home teaching aid, allowing parents, teachers, and facilitators to give children the 'whole Hero experience.'
Hero Unchained 3 Pak
Includes: A New Name for Worthless, Unchain My Heart, and Scream Like Banshee. The Hero Unchained 3 Pak is the perfect family pack, something for everyone in the family, and teaches about chaining, being a foster parent to a dog, and how you can get involved with making a dog's life better.
Hero Unchained Pak Plus
Includes: A New Name for Worthless, Unchain My Heart, Puddles On The Floor and Scream Like Banshee. The Hero Unchained Pak Plus takes the family pack above and adds in Puddles on the Floor, for those who want both children's books.
—Special Fundraiser! Signed Copies of
A Dog Named Worthless - A Hero Is Born
Special Signed Copy by the Author Rocky Shepheard.
All $10 Extra goes to support Dogs Deserve Better!
BOOK BLOGGER REVIEWS
|
CHICAGO – Nearly 1 in 20 Americans older than 50 have artificial knees, or more than 4 million people, according to the first national estimate showing how common these replacement joints have become in an aging population.
Doctors know the number of knee replacement operations has surged in the last decade, especially in baby boomers. But until now, there was no good fix on the total number of people living with them.
The estimate is important because it shows that a big segment of the population might need future knee-related care, said Dr. Daniel Berry, president of the American Academy of Orthopedic Surgeons and chairman of orthopedic surgery at the Mayo Clinic in Rochester, Minn. He was not involved in the research.
People with knee replacements sometimes develop knee infections or scar tissue that require additional treatment. Artificial knees also wear out, so as the operations are increasingly done on younger people, many patients will almost certainly live long enough to need a second or even a third knee replacement.
“These data are sobering because we didn’t know what an army of people we’ve created over the last decade,” said Elena Losina, lead author of the analysis and co-director of the Orthopedics and Arthritis Center for Outcomes Research at Harvard’s Brigham and Women’s Hospital. “The numbers will only increase, based on current trends.”
Replacement joints can greatly improve quality of life for people with worn-out knees, but they’re not risk-free and it’s a major operation that people should not take lightly, she said.
Modern knee replacements in the United States date to the 1970s. Since then, advances in materials and techniques, including imaging scans to create better-fitting joints, have made the implants more durable and lifelike, surgeons say.
Losina and colleagues came up with their estimate by analyzing national data on the number of knee replacements done from 1998-2009, U.S. census data, death statistics and national health surveys.
For example, in 2009, more than 600,000 knee replacement operations were done nationwide. The study estimate includes people who had knee replacement operations that year and in previous years who are still living.
Overall, 4.5 million Americans are living with artificial knees. That includes an estimated 500,000 who have had at least two replacement operations on the same knee.
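The headline "nearly 1 in 20" ratio can be checked with quick arithmetic. The over-50 population figure below is an assumed round number used only for illustration; the article does not state it:

```python
# Sanity check of the article's figures: roughly 4.5 million Americans
# living with artificial knees, described as "nearly 1 in 20" of those over 50.
# The over-50 population is an assumed round figure, NOT taken from the article.

people_with_artificial_knees = 4_500_000
assumed_population_over_50 = 95_000_000  # assumption for illustration

prevalence = people_with_artificial_knees / assumed_population_over_50
print(f"Prevalence: {prevalence:.3f}, about 1 in {round(1 / prevalence)}")
# prints: Prevalence: 0.047, about 1 in 21
```

Under that assumption the prevalence comes out to about 1 in 21, consistent with the article's "nearly 1 in 20."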
Knee replacements are most common in people older than 80 – 1 in 10 people in this age range have them, the study found. Though they’re less prevalent in people younger than that, there are still more than half a million Americans in their 50s with the artificial joints, and based on current trends, operations in that age group are expected to increase.
According to the federal Agency for Healthcare Research and Quality, knee replacements tripled in people ages 45 to 64 between 1997 and 2009.
Doctors think two trends have contributed to that increase: the nation’s obesity epidemic and amateur athletes who don’t adjust workouts to spare aging or even injured joints. Both can lead to or worsen arthritis, the main reason for replacing knees.
Donna Brent, 63, is in the latter category. The Deerfield, Ill., administrative assistant says decades of racquetball, tennis, softball and other sports took a toll on her knees, but she got used to living with the pain, even when she became bowlegged and developed a limp. When pain “started getting in the way of some of my sports,” she gave in to her doctor’s advice and had the operation last June on her right knee. She said she feels better than ever, is back to exercising and plans to resume tennis and softball when the weather warms up.
During knee replacement operations, surgeons slice off a small portion of the worn-out surface on the ends of both leg bones that meet at the knee, then implant an artificial joint usually made of plastic or metal. Typical operations last about two hours, require a few days in the hospital and cost about $40,000.
Artificial knees generally last 15 to 20 years. While some are promoted as lasting 30 years, these estimates are generally based on use among older people more sedentary than baby boomers who expect new knees to let them be as active as they were before surgery. Sometimes that’s possible, though doctors often discourage knee replacement patients from engaging in high-impact sports including jogging.
The National Institute of Arthritis, Musculoskeletal and Skin Diseases paid for the study.
|
Focus on Economic Data: Consumer Price Index and Inflation, October 19, 2011
Glossary terms from:
Alternative
One of many choices or courses of action that might be taken in a given situation.
Business
Any activity or organization that produces or exchanges goods or services for a profit.
Consumer Price Index (CPI)
A price index that measures the cost of a fixed basket of consumer goods and services and compares the cost of this basket in one time period with its cost in some base period. Changes in the CPI are used to measure inflation.
Consumers
People who use goods and services to satisfy their personal needs and not for resale or in the production of other goods and services.
Consumption
Spending by households on goods and services. The process of buying and using goods and services.
Deflation
A sustained decrease in the average price level of all the goods and services produced in the economy.
Depression
A severe, prolonged economic contraction.
Federal Reserve System
The central bank of the United States. Its main function is controlling the money supply through monetary policy. The Federal Reserve System divides the country into 12 districts, each with its own Federal Reserve bank. Each district bank is directed by its nine-person board of directors. The Board of Governors, which is made up of seven members appointed by the President and confirmed by the Senate to 14-year terms, directs the nation's monetary policy and the overall activities of the Federal Reserve. The Federal Open Market Committee is the official policy-making body; it is made up of the members of the Board of Governors and five of the district bank presidents.
Goal
Something a person or organization plans to achieve in the future; an aim or desired result.
Goods
Tangible objects that satisfy economic wants.
Government Spending
Spending by all levels of government on goods and services; includes categories like military, schools and roads.
Households
Individuals and family units that buy goods and services (as consumers) and sell or rent productive resources (as resource owners).
Income
Payments earned by households for selling or renting their productive resources. May include salaries, wages, interest and dividends.
Inflation
A rise in the general or average price level of all the goods and services produced in an economy. Can be caused by pressure from the demand side of the market (demand-pull inflation) or pressure from the supply side of the market (cost-push inflation).
Interest
Money paid regularly, at a particular rate, for the use of borrowed money.
Labor
The quantity and quality of human effort available to produce goods and services.
Price
The amount of money that people pay when they buy a good or service; the amount they receive when they sell a good or service.
Price Level
The weighted average of the prices of all goods and services in an economy; used to calculate inflation.
Producers
People and firms that use resources to make goods and services.
Product
A good or service that can be used to satisfy a want.
Production
A process of manufacturing, growing, designing, or otherwise using productive resources to create goods or services used to satisfy a want.
Public Goods
Goods, often supplied by the government, for which use by one person does not reduce the quantity of the good available for others to use, and for which consumption cannot be limited to those who pay for the good.
Purchasing Power
The amount of goods and services that a monetary unit of income can buy.
Recession
A decline in the rate of national economic activity, usually measured by a decline in real GDP for at least two consecutive quarters (i.e., six months).
Savings
Money set aside for a future use that is held in easily-accessed accounts, such as savings accounts and certificates of deposit (CDs).
Services
Activities performed by people, firms or government agencies to satisfy economic wants.
Spend
Use money now to buy goods and services.
Standard of Living
The level of subsistence of a nation, social class or individual with reference to the adequacy of necessities and comforts of daily life.
Taxes
Compulsory payments to governments by households and businesses.
Utility
An abstract measure of the satisfaction consumers derive from consuming goods and services.
Wages
Payments for labor services that are directly tied to time worked, or to the number of units of output produced.
Workers
People employed to do work, producing goods and services.
|
Rainy Day Painting
Create your very own creepy, haunted castle sitting in a turbulent field of flowing grass, eerily surrounded by dark, ominous clouds.
Fireworks are such an exciting part of summer festivities, but it's sad when the show is over. Keep them alive all year long with watercolor fireworks.
Show your high schooler how to celebrate Mary Cassatt, an Impressionist painter, by creating a mother-child painting in her style.
Put your individual fingerprint on the 100th Day of School (literally!) with this activity.
Show your preschooler how to make a print of a butterfly using her hand as a tool--a great way to stimulate her sense of touch.
Use marbles and paint to explore the wild world of shapes and color...and build kindergarten writing strength, too.
Introduce your kindergartener to some art history by showing him how to create an everyday object print, Andy Warhol-style.
Celebrate the changing seasons with this fun, hands-on art activity that will teach your child about the different colors of the seasons.
Help your preschooler begin reading and writing the printed word by connecting simple letter recognition exercises with this easy art project: alphabet trees!
|
/******************************************************************************/
/* */
/* Copyright (c) 2013-2015, Kyu-Young Whang, KAIST */
/* All rights reserved. */
/* */
/* Redistribution and use in source and binary forms, with or without */
/* modification, are permitted provided that the following conditions */
/* are met: */
/* */
/* 1. Redistributions of source code must retain the above copyright */
/* notice, this list of conditions and the following disclaimer. */
/* */
/* 2. Redistributions in binary form must reproduce the above copyright */
/* notice, this list of conditions and the following disclaimer in */
/* the documentation and/or other materials provided with the */
/* distribution. */
/* */
/* 3. Neither the name of the copyright holder nor the names of its */
/* contributors may be used to endorse or promote products derived */
/* from this software without specific prior written permission. */
/* */
/* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS */
/* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT */
/* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS */
/* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE */
/* COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, */
/* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, */
/* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; */
/* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER */
/* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT */
/* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN */
/* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE */
/* POSSIBILITY OF SUCH DAMAGE. */
/* */
/******************************************************************************/
/******************************************************************************/
/* */
/* ODYSSEUS/EduCOSMOS Educational Purpose Object Storage System */
/* (Version 1.0) */
/* */
/* Developed by Professor Kyu-Young Whang et al. */
/* */
/* Advanced Information Technology Research Center (AITrc) */
/* Korea Advanced Institute of Science and Technology (KAIST) */
/* */
/* e-mail: [email protected] */
/* */
/******************************************************************************/
/*
* Macro Definitions
*/
#define ERR_ENCODE_ERROR_CODE(base,no) ( -1 * (((base) << 16) | (no)) )
/*
* Error Base Definitions
*/
#define GENERAL_ERR_BASE 1
#define BTM_ERR_BASE 7
/*
* Error Definitions for GENERAL_ERR_BASE
*/
#define eBADCURSOR ERR_ENCODE_ERROR_CODE(GENERAL_ERR_BASE,9)
/*
 * Error Definitions for BTM_ERR_BASE
 */
*/
#define eBADPARAMETER_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,0)
#define eBADBTREEPAGE_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,1)
#define eBADPAGE_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,2)
#define eNOTFOUND_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,3)
#define eDUPLICATEDOBJECTID_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,4)
#define eBADCOMPOP_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,5)
#define eDUPLICATEDKEY_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,6)
#define eBADPAGETYPE_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,7)
#define eEXCEEDMAXDEPTHOFBTREE_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,8)
#define eTRAVERSEPATH_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,9)
#define eNOSUCHTREELATCH_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,10)
#define eDELETEOBJECTFAILED_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,11)
#define eBADCACHETREELATCHCELLPTR_BTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,12)
#define NUM_ERRORS_BTM_ERR_BASE 13
#define eNOTSUPPORTED_EDUBTM ERR_ENCODE_ERROR_CODE(BTM_ERR_BASE,14)
|
#include <iostream>
#include <string>
#include <fstream>
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include "capture_checkerboard.h"
using namespace std;
using namespace sensor_msgs;
using namespace message_filters;
using namespace cv;
using std::vector;
void CapturePoseCheckerboard::setCameraInfo(const CameraInfoConstPtr& cam_info)
{
if (use_rect) // rgb is rectified
{
intrinsic_matrix.at<double>(0,0) = cam_info->P[0]; intrinsic_matrix.at<double>(0,1) = cam_info->P[1]; intrinsic_matrix.at<double>(0,2) = cam_info->P[2];
intrinsic_matrix.at<double>(1,0) = cam_info->P[4]; intrinsic_matrix.at<double>(1,1) = cam_info->P[5]; intrinsic_matrix.at<double>(1,2) = cam_info->P[6];
intrinsic_matrix.at<double>(2,0) = cam_info->P[8]; intrinsic_matrix.at<double>(2,1) = cam_info->P[9]; intrinsic_matrix.at<double>(2,2) = cam_info->P[10];
dist_coeff = Mat::zeros ( 1,5,CV_32F );
}
else // rgb is raw (not rectified)
{
intrinsic_matrix.at<double>(0,0) = cam_info->K[0]; intrinsic_matrix.at<double>(0,1) = cam_info->K[1]; intrinsic_matrix.at<double>(0,2) = cam_info->K[2];
intrinsic_matrix.at<double>(1,0) = cam_info->K[3]; intrinsic_matrix.at<double>(1,1) = cam_info->K[4]; intrinsic_matrix.at<double>(1,2) = cam_info->K[5];
intrinsic_matrix.at<double>(2,0) = cam_info->K[6]; intrinsic_matrix.at<double>(2,1) = cam_info->K[7]; intrinsic_matrix.at<double>(2,2) = cam_info->K[8];
cam_model.fromCameraInfo ( cam_info );
dist_coeff = Mat ( cam_model.distortionCoeffs() );
}
}
void CapturePoseCheckerboard::callBack( const ImageConstPtr& rgb, const CameraInfoConstPtr& cam_info )
{
if (ros::ok())
{
setCameraInfo(cam_info); // set camera_info
input_bridge = cv_bridge::toCvCopy( rgb, image_encodings::MONO8 );
image_grey = input_bridge->image;
int flags = 0;
if ( adaptive_thresh ) flags += CV_CALIB_CB_ADAPTIVE_THRESH;
if ( normalize_image ) flags += CV_CALIB_CB_NORMALIZE_IMAGE;
if ( filter_quads ) flags += CV_CALIB_CB_FILTER_QUADS;
if ( fast_check ) flags += CALIB_CB_FAST_CHECK;
bool patternfound = findChessboardCorners ( image_grey, patternsize, image_corners, flags );
if ( patternfound )
{
cv::cornerSubPix( image_grey, image_corners, Size(subpixelfit_window_size, subpixelfit_window_size),
Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
cv::solvePnP( object_corners, image_corners, cv::Mat( intrinsic_matrix, cv::Rect(0, 0, 3, 3) ), dist_coeff, rotation_vec, translation_vec );
extrinsic_matrix = cv::Mat_<double>::eye (4, 4);
extrinsic_matrix(0, 3) = translation_vec(0);
extrinsic_matrix(1, 3) = translation_vec(1);
extrinsic_matrix(2, 3) = translation_vec(2);
Rodrigues ( rotation_vec, cv::Mat( extrinsic_matrix, cv::Rect(0, 0, 3, 3)), noArray() );
projection_matrix = intrinsic_matrix * extrinsic_matrix;
// generate tf model to camera
R_check = tf::Matrix3x3 ( extrinsic_matrix( 0, 0 ), extrinsic_matrix( 0, 1 ), extrinsic_matrix( 0, 2 ),
extrinsic_matrix( 1, 0 ), extrinsic_matrix( 1, 1 ), extrinsic_matrix( 1, 2 ),
extrinsic_matrix( 2, 0 ), extrinsic_matrix( 2, 1 ), extrinsic_matrix( 2, 2 ) );
t_check = tf::Vector3 ( translation_vec ( 0 ), translation_vec ( 1 ), translation_vec ( 2 ) );
transform_check = tf::Transform ( R_check, t_check );
q_check = transform_check.getRotation();
// Publish check pose
pose_check.header = rgb->header;
pose_check.pose.orientation.x = q_check.x();
pose_check.pose.orientation.y = q_check.y();
pose_check.pose.orientation.z = q_check.z();
pose_check.pose.orientation.w = q_check.w();
pose_check.pose.position.x = t_check.x();
pose_check.pose.position.y = t_check.y();
pose_check.pose.position.z = t_check.z();
check_pose_pub.publish(pose_check);
// Debugging the output rotation matrix of the cam pose
// std::cout << "Cam Rotation" << std::endl << R_cam[0][0] << " " << R_cam[0][1] << " " << R_cam[0][2] << std::endl
// << R_cam[1][0] << " " << R_cam[1][1] << " " << R_cam[1][2] << std::endl
// << R_cam[2][0] << " " << R_cam[2][1] << " " << R_cam[2][2] << std::endl;
// std::cout << "Cam Translation" << std::endl << t_cam.x() << " " << t_cam.y() << " " << t_cam.z() << std::endl;
double nr_of_square = std::max ( col_num, row_num );
double size = square_size * nr_of_square;
int font = cv::FONT_HERSHEY_SIMPLEX;
double fontScale = 1.0;
double thickness = 1.0;
double lineType = CV_AA;
double lineThickness = 3;
input_bridge = cv_bridge::toCvCopy( rgb, image_encodings::BGR8 );
// ROS_INFO("Finish converting to BGR...\n");
image_rgb = input_bridge->image;
cv::Mat_<double> Pi0 = projection_matrix * ( cv::Mat_<double> ( 4,1 ) << 0, 0, 0, 1 );
cv::Point2d pi0 ( Pi0 ( 0,0 ) / Pi0 ( 0,2 ), Pi0 ( 0,1 ) / Pi0 ( 0,2 ) );
cv::circle ( image_rgb, pi0, 3, CV_RGB ( 255,255,255 ) );
cv::Mat_<double> Pi1 = projection_matrix * ( cv::Mat_<double> ( 4,1 ) << size, 0, 0, 1 );
cv::Point2d pi1 ( Pi1 ( 0,0 ) / Pi1 ( 0,2 ), Pi1 ( 0,1 ) / Pi1 ( 0,2 ) );
cv::circle ( image_rgb, pi1, 3, CV_RGB ( 255,0,0 ) );
putText ( image_rgb, "X", pi1, font, fontScale, CV_RGB ( 255,0,0 ), thickness, CV_AA );
cv::line ( image_rgb, pi0, pi1, CV_RGB ( 255,0,0 ), lineThickness );
cv::Mat_<double> Pi2 = projection_matrix * ( cv::Mat_<double> ( 4,1 ) << 0, size, 0, 1 );
cv::Point2d pi2 ( Pi2 ( 0,0 ) / Pi2 ( 0,2 ), Pi2 ( 0,1 ) / Pi2 ( 0,2 ) );
cv::circle ( image_rgb, pi2, 3, CV_RGB ( 0,255,0 ) );
putText ( image_rgb, "Y", pi2, font, fontScale, CV_RGB ( 0,255,0 ), thickness, CV_AA );
cv::line ( image_rgb, pi0, pi2, CV_RGB ( 0,255,0 ), lineThickness );
cv::Mat_<double> Pi3 = projection_matrix * ( cv::Mat_<double> ( 4,1 ) << 0, 0, size, 1 );
cv::Point2d pi3 ( Pi3 ( 0,0 ) / Pi3 ( 0,2 ), Pi3 ( 0,1 ) / Pi3 ( 0,2 ) );
cv::circle ( image_rgb, pi3, 3, CV_RGB ( 0,0,255 ) );
putText ( image_rgb, "Z", pi3, font, fontScale, CV_RGB ( 0,0,255 ) , thickness, CV_AA );
cv::line ( image_rgb, pi0, pi3, CV_RGB ( 0,0,255 ), lineThickness );
drawChessboardCorners ( image_rgb, patternsize, Mat ( image_corners ), patternfound );
// Publish pose estimation result
pose_result_msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", image_rgb).toImageMsg();
pose_result_msg->header = rgb->header;
pose_result_pub.publish(pose_result_msg);
}
else
{
ROS_INFO("Cannot find all the corners of the chessboard...");
}
}
}
int main( int argc, char** argv )
{
ros::init( argc, argv, "capture" );
string check_pose_topic = "/capture/pose_check";
string pose_result_topic = "/capture/pose_result";
// // PrimeSense
// string rgb_topic = "/camera/rgb/image_rect_color";
// string rgb_cam_info_topic = "/camera/rgb/camera_info";
// Intel Realsense
string rgb_topic = "/camera/color/image_raw";
string rgb_cam_info_topic = "/camera/color/camera_info";
bool use_rect = true;
CapturePoseCheckerboard CPC(check_pose_topic,
pose_result_topic,
rgb_topic,
rgb_cam_info_topic,
use_rect);
ros::spin();
return 0;
}
|
Open Access
Green Synthesis, Characterization and Uses of Palladium/Platinum Nanoparticles
Nanoscale Research Letters 2016, 11:482
Received: 4 June 2016
Accepted: 19 October 2016
Published: 2 November 2016
Biogenic synthesis of palladium (Pd) and platinum (Pt) nanoparticles from plants and microbes has captured the attention of many researchers because it is economical, sustainable and eco-friendly. Plants and their parts are known to contain various kinds of primary and secondary metabolites which reduce metal salts to metal nanoparticles. The shape, size and stability of Pd and Pt nanoparticles are influenced by pH, temperature, incubation time and the concentrations of the plant extract and the metal salt. Pd and Pt nanoparticles are broadly used as catalysts, as drugs and drug carriers, and in cancer treatment. They have shown size- and shape-dependent, specific and selective therapeutic properties. In this review, we discuss the biogenic fabrication of Pd/Pt nanoparticles and their potential applications in catalysis, medicine, biosensing, medical diagnostics and pharmaceuticals.
Biogenic fabrication Herbal extract Phytochemicals Metal nanoparticles Cancer
The main aim of green synthesis is to minimize the use of toxic chemicals and protect the environment from pollution. Biogenic routes for the fabrication of nanomaterials are therefore becoming more and more popular.
The three main requirements for nanomaterial preparation are (i) an environment-friendly solvent medium, (ii) a reducing agent and (iii) a nontoxic material for stabilization. Nanomaterials fabricated from plants, fungi and bacteria have several potential applications in all fields of science and technology [1–10]. The reduction of metal ions occurs via the proteins, amines, amino acids, phenols, sugars, ketones, aldehydes and carboxylic acids present in plants and microbes. The geometrical shape, size and stability of nanoparticles may be controlled by monitoring the pH, temperature, incubation time and the concentrations of the plant extract and the metal salt.
Both palladium and platinum are high-density silvery white precious metals. Biogenic fabrication of palladium and platinum nanoparticles using various plant species such as Anogeissus latifolia, Cinnamomum zeylanicum, Cinnamomum camphora, Curcuma longa, Diospyros kaki, Gardenia jasminoides, Glycine max, Musa paradisiaca, Ocimum sanctum, Pinus resinosa and Pulicaria glutinosa has been reported. The properties of palladium and platinum nanoparticles fabricated from various plant parts are summarized in Table 1 and Figs. 1 and 2. They are employed both as heterogeneous and homogeneous catalysts due to their large surface-to-volume ratio and high surface energy [11]. They are used in many medical diagnoses without destroying the DNA structure [12]. Palladium and platinum nanoparticles fabricated from herbal extracts have been examined for their heterogeneous catalytic activity in the Suzuki–Miyaura coupling reaction [13]. Since it is a ligand-free catalytic reaction, it can be carried out easily in an aqueous medium in the open without fear of dissociation. The yield is very high even with one mole % of palladium or platinum nanoparticles under ordinary conditions.
Table 1
Important examples of phytosynthesis of palladium and platinum nanoparticles with their size (nm) and shape, by plant and part used. (The tabular layout was lost in extraction; only partial cell data survive.) Plants reported include Anogeissus latifolia, Azadirachta indica (small and large spheres), Cinnamomum zeylanicum, Cinnamomum camphora, Curcuma longa, Diospyros kaki, Euphorbia granulata, Gardenia jasminoides, Glycine max, Moringa oleifera (waste petal; peel extract, 27 ± 2 nm), Musa paradisiaca (peeled banana; crystalline, irregular), Ocimum sanctum, Pulicaria glutinosa (whole plant; crystalline and spherical), Pinus resinosa and Prunus x yedoensis.
Fig. 1
Biogenic synthesis of palladium and platinum nanoparticles
Fig. 2
Application of palladium and platinum nanoparticles
In this review, we have discussed the biosynthesis of palladium/platinum nanoparticles and their characterization using scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), UV-vis and Fourier transform infrared (FTIR) spectroscopy. In addition, their application in catalysis, treatment of cancer and other disciplines of biological sciences has been assessed.
Biosynthesis of Palladium Nanoparticles
When an aqueous solution of [Pd(OAc)2] was stirred with a methanolic extract of Catharanthus roseus for 1 h at 60 °C, a change in colour occurred. The mixture showed an absorption peak in the 360–400 nm range of the UV-visible spectrum, which corresponds to spherical palladium nanoparticles of ~40 nm. C. roseus extract is a mixture of eight compounds containing –OH groups which reduce the metal ions to metal nanoparticles.
$$ \mathrm{Pd(CH_3COO)_2} + \text{reducing extract} \rightarrow \mathrm{Pd} + 2\,\mathrm{CH_3COOH} $$
Synthesis, characterization and application of palladium nanoparticles as photocatalytic agent have been reported [14, 15]. The degradation of phenol red by palladium nanoparticles has been investigated. The nanoparticles were added to phenol red and stirred at room temperature at varying pH (2–10). The surface plasmon resonance (SPR) band of dye at 433 nm disappeared at pH 6 showing the degradation of phenol red [15].
Palladium nanoparticles synthesized from an aqueous leaf extract of Hippophae rhamnoides have been reported [13]. They were characterized by SEM, TEM, XRD, UV-vis and FTIR spectroscopy. The polyphenols present act as both reducing and capping agents for the palladium nanoparticles. The particle size ranged between 2.5 and 14 nm, and most particles were spherical. Their activity as a heterogeneous catalyst was evaluated for the Suzuki–Miyaura coupling reaction in water under ligand-free conditions. Iodobenzene with phenylboronic acid in the presence of palladium nanoparticles at 100 °C in alkaline medium gave 100 % yield of the product. Different aryl halides were tried with phenylboronic acid, and all gave the corresponding compounds in high yield (91–95 %).
Momeni and Nabipour [16] used the alga Sargassum bovinum for palladium nanoparticle fabrication. The authors followed the conversion of palladium ions into metallic palladium using UV-vis spectroscopy in the range of 300–800 nm (Fig. 3). A change in colour from yellow to dark brown indicated the formation of palladium nanoparticles. Figure 3 presents the absorption spectra of palladium nanoparticles after 24 h of reduction by the crude extract, compared with those of PdCl2 solution. The 5–10 nm palladium nanoparticles were checked for catalytic activity via the electrochemical reduction of H2O2. Since they were stable for up to 5 months, they are believed to be stabilized by polysaccharides present in the algal extract. The reduction of H2O2 was also confirmed by cyclic voltammetry.
Fig. 3
UV-vis spectroscopy of (a) PdCl2 solution and (b) palladium nanoparticles after reduction by crude extract of Sargassum bovinum at 60 °C for 24 h. The inset shows an image of the as-prepared Pd colloidal solution and the PdCl2 solution before reaction [16]
Bimetallic nanoparticles with a core-shell structure and shape-controlled synthesis have been reported for Au@Pd nanoparticles [17, 18]. To reduced gold nanoparticles, another metal was added and subsequently reduced chemically or by a plant extract containing a mild reducing agent (Cacumen platycladi leaf extract). The gold nanoparticles were enveloped by the second metal, giving a particular shape which depends on the arrangement of the second metal's nanoparticles around the gold. The bimetallic flower-shaped Au@Pd nanoparticles can be recognized by a dark central core surrounded by a light-coloured shell. Their average size was 47.8 ± 2.3 nm, with a face-centred cubic structure [19].
Green synthesis of Pd/Fe3O4 nanoparticles from Euphorbia condylocarpa M. bieb root extract and their catalytic activity have recently been reported [20]. The extract contains flavonoids, which provide electrons for the reduction of the metal ions. Fe3O4/Pd is a good catalyst and can be used for several cycles of Sonogashira and Suzuki coupling reactions without loss of activity, although Fe3O4 is highly sensitive to air. Since the Pd/Fe3O4 particles are magnetic, they were recovered from the reaction mixture with a magnet and recycled several times for the Sonogashira coupling reaction with negligible loss of activity (Fig. 4).
Fig. 4
Reusability of Pd/Fe3O4 nanoparticles for Sonogashira coupling reaction [20]
Biosynthesis of palladium nanoparticles on reduced graphene oxide using barberry fruit extract, and their application as a heterogeneous catalyst for the reduction of nitroarenes to amines, has been carried out at 50 °C in a 1:2 alcohol–water mixture [21]. Vitamin C appears to be a major phytochemical in the extract and therefore reduced the metal ions to nanoparticles. The average size of the palladium nanoparticles was nearly 18 nm. Catalytic activity was determined by the reduction of nitrobenzene to aniline with NaBH4. The reduction occurs on the surface of the catalyst and depends on the speed of adsorption of the nitrocompound onto the active site of the catalyst. The process is complicated but occurs stepwise: adsorption of H2 and the nitrocompound, electron transfer from the BH4 unit to the nitro derivative and, finally, desorption of the amino compound from the catalyst surface. The catalyst can be used for five cycles without significant loss of activity.
Very recently, palladium nanoparticles were synthesized from Salvadora persica root extract. The extract was found to contain polyphenols, which acted both as bioreductant and stabilizing agent [22]. Nanoparticles of 10 nm average size were obtained at 90 °C, as ascertained from the loss of colour and the disappearance of the absorption band at 415 nm in the UV-vis spectrum of the colloidal solution.
Palladium nanoparticles have been synthesized from C. zeylanicum bark extract and PdCl2 at 30 °C [23]. Although the reaction started after 24 h, it was complete only after 72 h. The nanoparticles were polydispersed and spherical, ranging between 15 and 20 nm. Their formation depended on the increasing concentration of the extract. The XRD pattern confirmed the presence of crystalline palladium. The effect of pH on the formation of nanoparticles is insignificant, but precipitation occurs above pH 5. pH does not influence the shape of the nanoparticles but slightly affects their size [24]. It was noticed that nearly 60 % of the PdCl2 was reduced to palladium nanoparticles when only 5 ml of extract was treated with 50 ml of 1 mM PdCl2 at 30 °C. A higher concentration of the biomaterial may reduce the remaining 40 % of the PdCl2; otherwise, the suspension would contain both Pd2+ ions and palladium nanoparticles. C. zeylanicum bark extract is known to contain linalool, eugenol, methyl chavicol, cinnamaldehyde, ethyl cinnamate and β-caryophyllene [25], which have a distinct aroma and convert Pd ions to Pd nanoparticles. However, no clear mechanism has been given for the reduction of PdCl2 to Pd nanoparticles.
Sathishkumar et al. [26] have reported the biosynthesis of palladium nanoparticles from C. longa extract. The nanoparticles of 10–15 nm are believed to be formed by a redox process involving polyphenols as the reducing agent. They were found to be stable even after 3 months. The pH of the solution had almost negligible effect on the formation of nanoparticles, but size increases with pH.
Green synthesis of palladium nanoparticles from the dried fruit extract of G. jasminoides Ellis has been achieved at 60 °C after 1.5 h of incubation [27]. The formation of nanoparticles was indicated by a change in colour from orange to dark brown. The extract had three distinct absorption peaks at 238, 322 and 440 nm, corresponding to geniposide [28], chlorogenic acid [29] and crocins/crocetin [30], respectively. These compounds are antioxidants [30–32] and contain carbonyl, carboxyl and hydroxyl groups. The orange colour was mainly due to crocins/crocetin, and it disappeared after 1.5 h, although the other absorptions did not change even after 12 h at this temperature. The XRD pattern showed the presence of a face-centred cubic structure of Pd0 with 3.9-nm diameter. The FTIR spectra showed the presence of various functional groups. Some new peaks were detected after the reduction of Pd2+ to Pd0. Since not all Pd2+ ions are completely reduced, the appearance of new peaks was attributed to their coordination with the carbonyl compounds present in the extract. The TEM images showed spherical, rod and three-dimensional polyhedral structures at 40 °C, but these vary with increasing temperature. The smaller particles are nicely dispersed at 70 °C while the larger ones are agglomerated. Normally, the particle size varies between 4.47 and 13.63 nm at temperatures between 40 and 90 °C, although more than 75 % of the palladium nanoparticles were 3–5 nm in diameter.
Biogenic synthesis of palladium nanoparticles has been carried out with biodegradable banana peel extract, and the particles were characterized via UV-vis, IR, SEM and XRD [33]. The peel extract powder was reacted with PdCl2 at 80 °C for 3 min in water. The UV-vis spectra of all mixtures showed a peak at 400 nm, but after the reduction of Pd2+ to Pd0, the peaks were either shifted or had disappeared, with a constant change in colour from yellow to red due to excitation of surface plasmon vibration in the palladium nanoparticles. The SEM images showed nanoparticles and aggregates. After accumulation, dendrites are formed which look like a beautiful flower twig. At higher magnification, however, the dendrites are shown to be composed of microcubes, nicely arranged as a motif (Fig. 5a–d). The average size of the palladium nanoparticles was 50 nm. The FTIR spectral data showed the presence of carboxyl, amino and hydroxyl groups, which are supposed to be the active ingredients for the reduction of PdCl2.
Fig. 5
Scanning electron micrographs of a palladium nanoparticles. b– d Microwire networks at the periphery due to coffee ring effect. a Magnification: ×10,000, inset bar: 1 μm. b Magnification: ×200, inset bar: 100 μm. c Magnification: ×1000, inset bar: 10 μm. d Magnification: ×4500, inset bar: 5 μm [33]
Petla et al. [34] have reported the synthesis of palladium nanoparticles from soybean leaf (G. max) extract. Although the reduction started after 5 min, the characteristic absorption peak at 420 nm for Pd2+ disappeared completely only after 48 h, indicating complete conversion of Pd2+ → Pd0. The TEM micrograph showed the formation of uniform spherical particles of ~15 nm. The authors claim that only 8 of the 20 essential amino acids are IR active and that these reduce the Pd2+ ions. This misunderstands the fundamental basis of IR spectroscopy: any molecule whose vibration produces a change in dipole moment is IR active. It is therefore suggested that all amino acids and proteins are IR active and some of them may act as reducing agents.
Biosynthesis of palladium nanoparticles from P. glutinosa plant extract has been carried out at 90 °C after stirring the mixture of PdCl2 and extract for 2 h [35]. A change in colour from light yellow to dark brown showed the formation of palladium nanoparticles, which was confirmed by a UV-vis spectral study. The TEM micrograph showed palladium nanoparticles of 20–25-nm diameter covered with an organic layer from the extract, which acts as both capping agent and reducing agent. The IR spectrum of the plant indicated the presence of flavonoids and polyphenols. Their catalytic activity was examined in the Suzuki reaction of bromobenzene with phenylboronic acid (Fig. 6) in an aqueous medium [36] without prior activation [37], in the presence of SDS and K3PO4 under anaerobic conditions. Biphenyl was obtained when only 5 mol % of palladium nanoparticles was used as catalyst. Nearly 60 % conversion was achieved within the first minute and the reaction was complete in only 4 min; the reaction was fast and effective.
Fig. 6
a Schematic representation of the Suzuki reaction of bromobenzene with phenylboronic acid under aqueous conditions. b Time-dependent conversion efficiency of the Suzuki reaction of bromobenzene with phenylboronic acid under aqueous and aerobic conditions determined by GC analysis [35]
Recently, palladium nanoparticles from Arabidopsis plant culture and K2PdCl4 were prepared [38]. The reduction was complete in 24 h. TEM images of different sections of the plant showed well-dispersed spherical metallic nanoparticles of an average diameter of 3 nm during first 3 h. As the incubation time increased, the size and concentration of nanoparticles also increased up to 32 nm. They were distributed uniformly in the apoplast regions. Plant had attained maximum palladium concentration after 18 h. The mechanism underlying the reduction of Pd2+ ion to elemental Pd inside the plant system is not yet clear. However, the binding of Pd2+ ions to carboxyl, amino and sulfhydryl groups, present in the plant, prior to the formation of nanoparticles is one of the likely steps. The authors have not found any enzyme in plants, and therefore, it is suggested that reduction of metal is a chemical rather than biological process. A chemical-based reduction process was carried out using a single chemical in an isolated system whereas biological reductions occur in the presence of biomolecules in a biological system such as plants or microbes. The conversion is a redox process irrespective of the chemical or biological system used.
Besides the biosynthesis of palladium nanoparticles, Kora and Rastogi [39] have studied their antioxidant and catalytic properties. A water-soluble plant gum polymer, gum ghatti (A. latifolia), was allowed to react with PdCl2 at 121 °C and 103 kPa for 30 min, which produced a change in colour followed by the disappearance of the absorption peak at 427 nm in the UV-vis region. The nanoparticles were spherical and polydispersed, with an average size of 4.8 ± 1.6 nm. The hydroxyl and carboxyl groups of the gum are thought to bind the Pd2+ ions initially and subsequently reduce them, while the proteins and polysaccharides of the gum stabilize and cap the resulting nanoparticles. This protocol for palladium nanoparticle synthesis is superior to other similar methods [33, 40] because it takes little time and produces nanoparticles of very small size (4.8 nm). The homogeneous catalytic activity of the palladium nanoparticles was investigated through the NaBH4 reduction of dyes, for instance Coomassie brilliant blue G-250, methylene blue, methyl orange and 4-nitrophenol. The characteristic absorption peak of Coomassie brilliant blue at 588 nm was monitored during the palladium nanoparticle-catalysed NaBH4 reduction: the dye decolorised within 2 min, the disappearance of this peak showing complete reduction in a short span of time. The reduction of methylene blue was studied in the same way; its characteristic absorptions at 664 and 612 nm disappeared, and the dye became colourless, showing its reduction to colourless leuco methylene blue. Similarly, the methyl orange peak at 462 nm also vanished during reduction. The reduction of 4-nitrophenol to 4-aminophenol was monitored through the absorption at 318 nm, which was red-shifted to 400 nm owing to the formation of nitrophenolate ions in the presence of NaBH4.
With the addition of palladium nanoparticles, the intensity of the peak at 400 nm diminished with the concurrent emergence of a new absorption peak at 294 nm, indicating the reduction of nitrophenol to aminophenol. The conversion was also visible as the disappearance of the yellow colour. The reduction of all the above dyes is thermodynamically favoured but kinetically hindered owing to the large potential difference between the donor and acceptor molecules. In the presence of palladium nanoparticles, however, both reactants are adsorbed on the particle surface, and the nanoparticles facilitate the transfer of electrons from the reductant NaBH4 to the oxidant substrate. Since palladium nanoparticles act as a redox catalyst, they decrease the activation energy of the ensuing reaction via an electron relay effect [41].
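The spectrophotometric monitoring described above lends itself to a simple rate analysis: under the Beer-Lambert law the absorbance is proportional to the remaining dye concentration, so ln(A0/At) grows linearly with time for a pseudo-first-order process. The sketch below is illustrative only; the absorbance values and sampling times are hypothetical, not data from [39]:

```python
import math

# Hypothetical absorbance decay of a dye peak (e.g. Coomassie brilliant blue
# at 588 nm) during catalysed NaBH4 reduction. For pseudo-first-order decay,
# ln(A0/At) = k*t, so the slope of ln(A0/At) vs t gives the rate constant.
times = [0.0, 0.5, 1.0, 1.5, 2.0]               # min (assumed sampling grid)
absorbance = [1.00, 0.55, 0.30, 0.17, 0.09]     # hypothetical A at the dye peak

# Least-squares slope through the origin of ln(A0/At) vs t.
y = [math.log(absorbance[0] / a) for a in absorbance]
k = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
print(f"apparent rate constant k = {k:.2f} min^-1")
```

A large excess of NaBH4, as used in such studies, is what justifies treating the kinetics as pseudo-first-order in the dye.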
Synthesis of palladium nanoparticles from Moringa oleifera biomass, containing bis-phthalate as a natural reducing and capping agent, has been reported; their average size ranged between 10 and 50 nm, and they were spherical, well dispersed and showed no aggregation [42]. TEM studies showed that the small palladium nanoparticles were stabilized by the phytochemicals, which was also confirmed by zeta potential measurements and GC-MS. M. oleifera peel extract has also been used for palladium nanoparticle fabrication [43]; these particles were characterized by UV-vis spectroscopy, XRD, SEM and HR-TEM studies.
Palladium nanoparticles synthesized from Euphorbia granulate leaf extract have been used as a heterogeneous catalyst for the phosphine-free Suzuki–Miyaura coupling reaction at room temperature [44]. TEM micrograph showed that palladium nanoparticles were 25–35 nm in size.
Biosynthesis of palladium nanoparticles from Prunus x yedoensis leaf extract has been reported, with characterization by UV-vis, XRD, FTIR, HR-TEM and SAED [45]. Formation of the palladium nanoparticles was confirmed by a change in colour from light yellow to dark brown. Manikandan et al. [45] suggested optimized parameters for the production of palladium nanoparticles: pH 7, a 40:5 Pd(II):leaf extract ratio, 3 mM Pd(II) and a reaction time of 30 min. The UV-vis spectrum showed an absorption peak at 421 nm, and the XRD pattern (peak at 2θ = 42.5°) confirmed the crystalline nature of the palladium nanoparticles. TEM images showed spherical particles of 50–150 nm. The FTIR spectrum of Prunus x yedoensis leaf extract (Fig. 7) showed the presence of alcohols, ethers, esters, carboxylic acids and amino acids [15, 46–48], which acted as the reducing agents converting palladium ions to palladium nanoparticles.
Fig. 7
Characterization of palladium nanoparticles using FTIR studies [45]
Application of Palladium Nanoparticles
Palladium adsorbs about 1000 times its own volume of hydrogen when heated to dull redness. Its catalytic activity is due to the dissociation of molecular hydrogen into the atomic state: H2 → 2H.
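The "1000 volumes" figure can be checked with a short back-of-the-envelope calculation converting absorbed gas volume into an atomic H/Pd ratio. This is our own arithmetic using standard physical constants, not a result from the review:

```python
# Convert "1000 cm^3 of H2 (at STP) absorbed per 1 cm^3 of Pd" into an
# atomic H/Pd ratio, using handbook values for Pd and the ideal-gas molar volume.
PD_DENSITY = 12.02      # g/cm^3, density of palladium metal
PD_MOLAR_MASS = 106.42  # g/mol
VM_STP = 22414.0        # cm^3/mol, molar volume of an ideal gas at STP

mol_pd = PD_DENSITY / PD_MOLAR_MASS   # mol Pd in 1 cm^3 of metal
mol_h = 2 * (1000.0 / VM_STP)         # mol H atoms in 1000 cm^3 of H2
ratio = mol_h / mol_pd
print(f"H/Pd atomic ratio = {ratio:.2f}")
```

The result is roughly 0.8 hydrogen atoms per palladium atom, close to the composition of the well-known hydrogen-rich palladium hydride phase, so the classical figure is physically plausible.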
Palladium nanoparticles doped with chitosan–graphene have been employed as a biosensor for glucose estimation [36]. Palladium nanoparticles on graphene oxide have also been used as a recyclable heterogeneous catalyst for the reduction of nitroarenes with sodium borohydride; since the recovered catalyst can be reused for five cycles, it is suitable for large-scale reduction of nitroarenes. The same catalyst has been used in the reduction of methylene blue, methyl orange and nitrophenol, and the nanoparticles exhibited excellent degradation of these dyes; they can therefore be used to treat dye-containing effluents. Both palladium and platinum are extensively used in the oxidative addition and reductive elimination of hydrogen. Platinised asbestos is used in many catalytic reactions [49], for instance (i) in the contact process for the manufacture of H2SO4, (ii) in the Ostwald process for the oxidation of NH3 to NO in the manufacture of HNO3, (iii) in the oxidation of methanol to formaldehyde and (iv) in the decomposition of hydrazine to nitrogen and ammonia. Platinum–gold dendrimer-like nanoparticles supported on polydopamine-functionalized graphene oxide reduce nitrophenol to aminophenol [50]; their ability to catalyse the reduction depends on the platinum-to-gold ratio.
Palladium nanoparticles have been fabricated from S. persica root extract, and their catalytic activity was examined in the Suzuki coupling of aryl halides with benzeneboronic acid in water to give biphenyl [22]. The conversion efficiency, as a function of time and yield, follows the order iodobenzene > bromobenzene > chlorobenzene, although the major conversion occurred in the first 2 min. The palladium nanoparticle catalyst could be successfully reused for only three cycles. In another study, Myrtus communis leaf extract was used for the production of Pd/TiO2 nanoparticles [51]; the authors demonstrated that the Pd/TiO2 nanoparticles act as a highly efficient, stable and recyclable catalyst for the ligand-free Suzuki–Miyaura coupling reaction.
Palladium nanoparticles biosynthesized from dried fruit extract of G. jasminoides Ellis have been investigated for catalytic activity in the hydrogenation of p-nitrotoluene to p-toluidine and subsequently to p-methyl-cyclohexylamine [27]. It is interesting to note that the conversion to p-toluidine was 100 %, whereas the second reduction was only 26 % at 80–90 °C. The palladium nanoparticles could be recycled five times without agglomeration.
Kora and Rastogi [39] have used A. latifolia for the biosynthesis of palladium nanoparticles and demonstrated their antioxidant and catalytic potential. In many studies, palladium nanoparticles have been used as catalysts for Suzuki–Miyaura reactions to synthesize pharmaceutical intermediates and other important chemicals. When the palladium nanoparticle-containing plant material was heated to 300 °C, the resulting material contained 18 % Pd2+ or PdO but no palladium nanoparticles. This material, Pd-300, was used as a catalyst for Suzuki–Miyaura reactions; high yields were obtained with aryl iodides and aryl bromides [52], higher than those obtained with palladium nanoparticles in similar reactions. This catalyst is far superior to the commercially available palladium catalysts 10 % Pd/C and Pd(OAc)2 and, hence, can be used as a potential catalyst of the future.
Sheny et al. [52] have reported the biosynthesis of palladium nanoparticles from dried leaf powder of Anacardium occidentale at pH 6–9. TEM images showed irregular, rod-shaped crystalline particles. They observed that the quantity of leaf powder plays a vital role in determining the size of the nanoparticles. The FTIR spectrum of the suspension suggested the presence of secondary metabolites bearing hydroxyl groups, which reduced the metal ions to nanoparticles. These palladium nanoparticles exhibited catalytic activity in the reduction of aromatic nitrocompounds.
Biosynthesis of Platinum Nanoparticles
Platinum nanoparticles have been fabricated using tea polyphenols, which act as both reducing and capping agents [52]. These functionalized nanoparticles of 30–60 nm were crystalline with a face-centred cubic structure, and TEM images showed that the capped nanoparticles were flower shaped. Tea polyphenols are known to contain a number of phenolic compounds which can form complexes with metal ions and subsequently reduce them to nanoparticles of different shapes and sizes [5, 53, 54].
Biosynthesis of platinum nanoparticle pellets using O. sanctum leaf broth was achieved at 100 °C in 1 h [55]. The reduction was quantitative and identified by a change in colour from yellow to brown and finally black indicating reduction in successive steps as shown below:
$$ \mathrm{Pt}^{4+}\ \xrightarrow{2\mathrm{e}^{-}}\ \mathrm{Pt}^{2+}\ \xrightarrow{2\mathrm{e}^{-}}\ \mathrm{Pt}^{0} $$
Ascorbic acid and terpenoids, known to be present in the O. sanctum leaf extract, act as both reducing and stabilizing agents. The average particle size was found to be 23 nm. Energy-dispersive X-ray spectroscopy (EDAX) showed a net 71 % platinum content, while XRD indicated the presence of PtO2, K2(PtCl4), Pt and PtCl2 (Fig. 8a, b).
Fig. 8
a EDAX spectrum. b XRD analysis of the reduced platinum from Ocimum sanctum leaf broth [55]
A facile route to Pt–Au alloy nanoparticles supported on polydopamine-functionalized graphene has been reported by Ye et al. [50], and their catalytic activity in 4-nitrophenol reduction has been studied. The alloy exhibited higher catalytic activity than platinum nanoparticles deposited on reduced graphene oxide sheets (RGO). Ascorbic acid was used as the reducing agent instead of any natural source, and therefore this method cannot be termed “green”. It was shown earlier that the multifunctional polymer disperses the reduced graphene oxide in aqueous solution and that the functional groups in the biopolymer then bind metal ions and metal nanoparticles. Ye and co-workers [50] suggested that reduced graphene oxide coated with polydopamine (PDA/RGO), containing amine and catechol groups, acts as a reducing agent for PdCl4 2−/HAuCl4, followed by reduction by ascorbic acid and production of Pt-Au-PDA/RGO. This is not a convincing hypothesis: if the functional groups on the biopolymer act as the reducing agent, there is obviously no need for ascorbic acid as a secondary reductant in the production of nanoparticles from H2PdCl4/HAuCl4. Besides, it is unclear how ascorbic acid could itself be reduced when it is a well-known reducing agent; more naturally, the PDA would act as a stabilizer and ascorbic acid as the reducing agent. The catalytic activity of the monometallic Pt-PDA/RGO or Au nanoparticles is two to four times lower than that of the bimetallic nanoparticles. During the reduction of 4-nitrophenol by NaBH4, electron transfer from BH4 − to 4-nitrophenol occurs when both are adsorbed on the surface of the catalyst; interestingly, 4-nitrophenol is preferentially adsorbed on Au [56, 57].
One-pot synthesis of platinum and palladium nanoparticles from natural lignin and fulvic acid in water at pH 7 and 80 °C under aerobic conditions has been reported [58]. These polymers act as both reducing and stabilizing agents. The formation of platinum nanoparticles with lignin was followed by UV-vis spectroscopy, which showed the disappearance of the characteristic Pt4+ peak at 257 nm after 4 h, with a consequent change in colour from orange to dark brown. With fulvic acid, a band appeared at 280 nm, owing to the phenolic groups present in it. NMR spectra showed the presence of PtCl6 2− and PtCl5(H2O) species, which slowly disappeared as the nanoparticles formed. TEM images showed platinum nanoparticles of irregular size that form clusters, with average diameters between 6 and 8 nm. The palladium nanoparticles formed with lignin and fulvic acid were always spherical and larger than the platinum nanoparticles, with diameters of 16–20 nm. Both platinum and palladium nanoparticles were investigated for their catalytic efficiency in the reduction of 4-nitrophenol to 4-aminophenol in the presence of NaBH4: the absorption peak of nitrophenol at 399 nm diminished, and a new absorption band corresponding to 4-aminophenol appeared at 292 nm after 15 min.
Platinum nanoparticles have been prepared from polyols in the presence of polyvinylpyrrolidone, which stabilized the nanoparticles and prevented their aggregation. They were of 5–7- and 8–12-nm diameter, with cubic, hexagonal, square and tetrahedral shapes [59]. Time and temperature are the controlling factors for nanoparticle formation; AgNO3 was added to the mixture of H2PtCl6 and polyols to control the size and shape of the platinum nanoparticles.
Extracellular synthesis of platinum nanoparticles of 2–12 nm from leaf extract of D. kaki has been reported at 95 °C using H2PtCl6·6H2O as the precursor [60]. Formation of the nanoparticles was confirmed by a change in colour, with an absorption at 477 nm. It was noted that 95 °C was the optimum temperature for the reduction of Pt4+ to Pt nanoparticles [61], and that the size of the nanoparticles decreased with increasing temperature, perhaps owing to the increased rate of reduction. The reduction is believed to be effected by terpenoids and reducing sugars present in the leaf extract.
Thirumurugan et al. [62] have reported the biosynthesis of platinum nanoparticles from Azadirachta indica extract. TEM studies indicated the formation of polydispersed nanoparticles, from small to large spheres (5–50 nm). The rate of platinum nanoparticle fabrication increased with the reaction temperature. The FTIR spectrum showed sharp peaks at 1728.22, 1365.60 and 1219.01 cm−1, corresponding to the presence of carbonyls, alkanes and aliphatic amines, respectively. A. indica leaf broth is believed to contain terpenoids, which act as both reducing agent and stabilizer for the nanoparticles [60].
Application of Platinum Nanoparticles
Platinum-based nanomaterials have been shown to be excellent therapeutic agents [63–70]. Platinum compounds such as cis-platin, carboplatin and oxaliplatin are frequently used in chemotherapy, especially in the treatment of ovarian and testicular tumours [71].
Since platinum group compounds are cytotoxic, the tea-capped platinum nanoparticles were investigated for their toxicity towards human cancer cells. It was also important to examine whether they are toxic to both healthy and cancer cells, as are the platinum complexes such as cis-platin and carboplatin used in the treatment of cancer, which have many side effects including nausea, vomiting, nephrotoxicity, neurotoxicity, ototoxicity, hematuria and alopecia. Cervical cancer cells (SiHa) were therefore treated with different concentrations of tea-capped platinum nanoparticles. The influence on cell viability, nuclear morphology and cell cycle distribution showed that the proliferation of SiHa cells was inhibited by the platinum nanoparticles. The tea polyphenol-capped platinum nanoparticles were tested at concentrations between 12.5 and 200 μg ml−1 for 24 and 48 h, and a significant dose-dependent decrease in cell viability was noticed with increasing concentration of nanoparticles. As the concentration increases, so does the total surface area presented by the particles and their tea polyphenol coating. The particle size and agglomeration are equally responsible for the cytotoxicity of platinum nanoparticles [72].
The effect of tea polyphenol-capped nanoparticles on nuclear morphology and fragmentation has also been investigated to understand the mode of apoptosis. Fluorescence microscopic images of treated and control SiHa cells showed deformation and fragmentation of chromatin over 24 and 48 h. However, Jensen et al. [73] and Smitha et al. [74] have shown that cell death induced by nanoparticles is solely dependent on their size, shape and surface area. Tea catechin compounds exhibit cytostatic properties in tumour cells [75, 76] and induce apoptosis in U937 cells and in human colon cancer (HCT116) cells [77]. Catechin hydrate exhibits anticancer effects by blocking the proliferation of MCF7 cells and inducing apoptosis [78].
Although platinum alloys have been used in devices for coronary artery disease, neuromodulation devices and catheters [79], they are not selective for cancer because they affect both normal and cancer cells, leading to many complications. Functionalized platinum nanoparticles have shown size- and shape-dependent, specific and selective therapeutic properties [64, 67, 80]. In many cases, platinum nanoparticles combined with other organic substances have also been used as pro-drugs [67, 70, 81]. Manikandan et al. [82] have shown that small platinum nanoparticles (5–6 nm) are biocompatible and exhibit apoptosis-inducing properties [49, 83]. This ability is enhanced manifold when they are coated with polymers or fortified with phytochemicals; for instance, the herbal extracts generally used for the green synthesis of nanoparticles contain phenols, sugars and acids which act as both reducing and stabilizing agents. Such phytochemicals in combination with cis-platin synergise apoptosis in breast cancer and cervical cancer [52, 72, 84]. A combination of platinum nanoparticles with ion irradiation has been found to enhance the efficiency of cancer therapy [85].
One-pot biogenic synthesis of palladium and platinum nanoparticles from herbal extracts, algae and fungi can be carried out under moderate conditions. A variety of nontoxic nanoparticles with different shapes and structural motifs (spheres, rods and rings) can be fabricated and stabilized, and their synthesis may be optimized by controlling the pH, temperature, incubation time and the concentrations of plant extract and metal salts. These biogenic nanoparticles can be applied as nanocatalysts in environmental remediation, to scavenge dyes from textile industry effluents, and in Suzuki coupling reactions for the production of many organic compounds. The fabricated nanoparticles have also shown antibacterial activity against Gram-negative and Gram-positive bacteria. Platinum group metal complexes are used as anticancer drugs, but they have toxic effects on normal cells; it is interesting that biogenically synthesized palladium and platinum nanoparticles capped and stabilized by phytochemicals are nontoxic. The functionalized nanoparticles can be used as medicines in the treatment of cancer and also as drug carriers. A new protocol may be developed for cancer therapy using palladium and platinum nanoparticles, which may be more effective and less toxic than the existing conventional drugs; their efficacy may be increased by coating them with nontoxic, soluble biopolymers. It is anticipated that improved versions of platinum group metal nanoparticles will one day replace conventional cancer drugs and that new nanocatalysts will revolutionize the manufacture of organic compounds.
Acknowledgements
The authors are thankful to the publishers for permission to adapt figures in this review.
Authors’ contributions
AH gathered the research data. AH and KSS analyzed the data and wrote this review paper. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Authors’ Affiliations
Department of Chemistry, Aligarh Muslim University
Department of Biology, College of Natural and Computational Sciences, University of Gondar
References
1. Zhang H, Li Q, Lu Y, Sun D, Lin X, Deng X, He N, Zheng S (2005) Biosorption and bioreduction of diamine silver complex by Corynebacterium. J Chem Technol Biotech 80:285–290
2. Ahmad A, Senapati S, Khan MI, Kumar R, Sastry M (2005) Extra-/intracellular biosynthesis of gold nanoparticles by an alkalotolerant fungus, Trichothecium sp. J Biomed Nanotechnol 1:47–53
3. Siddiqi KS, Husen A (2016) Fabrication of metal and metal oxide nanoparticles by algae and their toxic effects. Nano Res Lett 11:363
4. Huang J, Li Q, Sun D, Lu Y, Su Y, Yang X, Wang H, Wang Y, Shao W, He N, Hong J, Chen C (2007) Biosynthesis of silver and gold nanoparticles by novel sundried Cinnamomum camphora leaf. Nanotechnology 18:105104
5. Nadagouda MN, Varma RS (2008) Green synthesis of silver and palladium nanoparticles at room temperature using coffee and tea extract. Green Chem 10:859–862
6. Husen A, Siddiqi KS (2014) Phytosynthesis of nanoparticles: concept, controversy and application. Nano Res Lett 9:229
7. Husen A, Siddiqi KS (2014) Carbon and fullerene nanomaterials in plant system. J Nanobiotechnol 12:16
8. Husen A, Siddiqi KS (2014) Plants and microbes assisted selenium nanoparticles: characterization and application. J Nanobiotechnol 12:28
9. Siddiqi KS, Husen A (2016) Fabrication of metal nanoparticles from fungi and metal salts: scope and application. Nano Res Lett 11:98
10. Siddiqi KS, Husen A (2016) Engineered gold nanoparticles and plant adaptation potential. Nano Res Lett 11:400
11. Narayanan R, El-Sayed MA (2005) Catalysis with transition metal nanoparticles in colloidal solution: nanoparticle shape dependence and stability. J Phys Chem B 109:12663–12676
12. Thakkar KN, Mhatre SS, Parikh RY (2010) Biological synthesis of metallic nanoparticles. Nanomedicine 6:257–262
13. Nasrollahzadeh M, Sajadi SM, Maham M (2015) Green synthesis of palladium nanoparticles using Hippophae rhamnoides Linn leaf extract and their catalytic activity for the Suzuki–Miyaura coupling in water. J Mol Catal A Chem 396:297–303
14. Kumar B, Smita K, Cumbal L, Debut A (2015) Ultrasound agitated phytofabrication of palladium nanoparticles using Andean blackberry leaf and its photocatalytic activity. J Sau Chem Soc 19:574–580
15. Velmurugan P, Cho M, Lim SS, Seo SK, Myung H, Bang KS, Sivakumar S, Cho KM, Oh BT (2015) Phytosynthesis of silver nanoparticles by Prunus yedoensis leaf extract and their antimicrobial activity. Mater Lett 138:272–275
16. Momeni S, Nabipour I (2015) A simple green synthesis of palladium nanoparticles with Sargassum alga and their electrocatalytic activities towards hydrogen peroxide. Appl Biochem Biotechnol 176:1937–1949
17. Tao F, Grass ME, Zhang Y, Butcher DR, Renzas JR, Liu Z, Chung JY, Mun BS, Salmeron M, Somorjai GA (2008) Reaction-driven restructuring of Rh-Pd and Pt-Pd core-shell nanoparticles. Science 322:932–934
18. Xu JG, Wilson AR, Rathmell AR, Howe J, Chi MF, Wiley BJ (2011) Synthesis and catalytic properties of Au-Pd nanoflowers. ACS Nano 5:6119–6127
19. Sun D, Zhang G, Huang J, Wang H, Li Q (2014) Plant-mediated fabrication and surface enhanced Raman property of flower-like Au@Pd nanoparticles. Materials 7:1360–1369
20. Nasrollahzadeh M, Sajadi SM, Rostami-Vartooni A, Khalaj M (2015) Green synthesis of Pd/Fe3O4 nanoparticles using Euphorbia condylocarpa M. bieb root extract and their catalytic applications as magnetically recoverable and stable recyclable catalysts for the phosphine-free Sonogashira and Suzuki coupling reactions. J Mol Catal A Chem 396:31–39
21. Nasrollahzadeh M, Sajadi SM, Rostami-Vartooni A, Alizadeh M, Bagherzadeh M (2016) Green synthesis of the Pd nanoparticles supported on reduced graphene oxide using barberry fruit extract and its application as a recyclable and heterogeneous catalyst for the reduction of nitroarenes. J Colloid Interface Sci 466:360–368
22. Khan M, Albalawi GH, Shaik MR, Khan M, Adil SF, Kuniyil M, Alkhathlan HZ, Al-Warthan A, Siddiqui MRH (2016) Miswak mediated green synthesized palladium nanoparticles as effective catalysts for the Suzuki coupling reactions in aqueous media. J Sau Chem Soc
23. Sathishkumar M, Sneha K, Kwak IS, Mao J, Tripathy SJ, Yun YS (2009) Phyto-crystallization of palladium through reduction process using Cinnamon zeylanicum bark extract. J Hazard Mater 171:404–404
24. Yong P, Rowson NA, Farr JPG, Harris IR, Macaskie LE (2002) Bioreduction and biocrystallization of palladium by Desulfovibrio desulfuricans NCIMB 8307. Biotechnol Bioeng 80:369–379
25. Muthuswamy S, Rupasinghe HPV, Stratton GW (2008) Antimicrobial effect of cinnamon bark extract on Escherichia coli O157:H7, Listeria innocua and fresh-cut apple slices. J Food Saf 28:534–549
26. Sathishkumar M, Sneha K, Yun YS (2009) Palladium nanocrystals synthesis using Curcuma longa tuber extract. Int J Mater Sci 4:11–17
27. Jia L, Zhang Q, Li Q, Song H (2009) The biosynthesis of palladium nanoparticles by antioxidants in Gardenia jasminoides Ellis: long lifetime nanocatalysts for p-nitrotoluene hydrogenation. Nanotechnology 20:385601
28. Zhou T, Fan G, Hong Z, Chai Y, Wu Y (2005) Large-scale isolation and purification of geniposide from the fruit of Gardenia jasminoides Ellis by high-speed counter-current chromatography. J Chromatogr A 1100:76–80
29. Zhang B, Yang R, Zhao Y, Liu CZ (2008) Separation of chlorogenic acid from honeysuckle crude extracts by macroporous resins. J Chromatogr B 867:253–258
30. Chen Y, Zhang H, Tian X, Zhao C, Cai L, Liu Y, Jia L, Yin HX, Chen HX (2008) Antioxidant potential of crocins and ethanol extracts of Gardenia jasminoides Ellis and Crocus sativus L.: a relationship investigation between antioxidant activity and crocin content. Food Chem 109:484–492
31. Koo HJ, Lim KH, Jung HJ, Park EH (2006) Anti-inflammatory evaluation of gardenia extract, geniposide and genipin. J Ethnopharmacol 103:496–500
32. Xiang Z, Ning Z (2008) Scavenging and antioxidant properties of compound derived from chlorogenic acid in South-China honeysuckle. LWT-Food Sci Technol 41:1189–1203
33. Bankar A, Joshi B, Kumar AR, Zinjarde S (2010) Banana peel extract mediated novel route for the synthesis of palladium nanoparticles. Mater Lett 64:1951–1953
34. Petla RK, Vivekanandhan S, Misra M, Mohanty AK, Satyanarayana N (2012) Soybean (Glycine max) leaf extract based green synthesis of palladium nanoparticles. J Biomater Nanobiotechnol 3:14–19
35. Khan M, Khan M, Kuniyil M, Adil SF, Al-Warthan A, Alkhathlan HZ, Tremel W, Tahir MN, Siddiqui MRH (2014) Biogenic synthesis of palladium nanoparticles using Pulicaria glutinosa extract and their catalytic activity towards the Suzuki coupling reaction. Dalton Trans 43:9026
36. Qiong Z, Jin SC, Xiao FL, Hao TB, Jian HJ (2011) Palladium nanoparticles/chitosan-grafted graphene nanocomposites for construction of a glucose biosensor. Biosens Bioelec 26:3456–3463
37. Cacchi S, Caponetti E, Casadei MA, Giulio AD, Fabrizi G, Forte G, Goggiamani A, Moreno S, Paolicelli P, Petrucci F, Prastaro A, Saladino ML (2012) Suzuki-Miyaura cross-coupling of arenediazonium salts catalyzed by alginate/gellan-stabilized palladium nanoparticles under aerobic conditions in water. Green Chem 14:317–320
38. Parker HL, Rylott EL, Hunt AJ, Dodson JR, Taylor AF, Bruce NC, Clark JH (2014) Supported palladium nanoparticles synthesized by living plants as a catalyst for Suzuki-Miyaura reactions. PLoS One 9:e87192
39. Kora AJ, Rastogi L (2015) Green synthesis of palladium nanoparticles using gum ghatti (Anogeissus latifolia) and its application as an antioxidant and catalyst. Arab J Chem
40. Roopan SM, Bharathi A, Kumar R, Khanna VG, Prabhakarn A (2012) Acaricidal, insecticidal, and larvicidal efficacy of aqueous extract of Annona squamosa L peel as biomaterial for the reduction of palladium salts into nanoparticles. Colloids Surf B 92:209–212
41. Kalaiselvi A, Roopan SM, Madhumitha G, Ramalingam C, Elango G (2015) Synthesis and characterization of palladium nanoparticles using Catharanthus roseus leaf extract and its application in the photo-catalytic degradation. Spectrochim Acta Part A 135:116–119
42. Anand K, Tiloke C, Phulukdaree A, Ranjan B, Chuturgoon A, Singh S, Gengan RM (2016) Biosynthesis of palladium nanoparticles by using Moringa oleifera flower extract and their catalytic and biological properties. Photochem Photobiol. doi:
43. Surendra TV, Roopan SM, Arasu MV, Abdullah Al-Dhabi N, Rayalu GM (2016) RSM optimized Moringa oleifera peel extract for green synthesis of M. oleifera capped palladium nanoparticles with antibacterial and hemolytic property. J Photochem Photobiol B Biol 162:550–557
44. Nasrollahzadeh M, Mohammad Sajadi S (2016) Pd nanoparticles synthesized in situ with the use of Euphorbia granulate leaf extract: catalytic properties of the resulting particles. J Coll Inter Sci 462:243–251
45. Manikandan V, Velmurugan P, Park JH, Lovanh N, Seo SK, Jayanthi P, Park YJ, Cho M, Oh BT (2016) Synthesis and antimicrobial activity of palladium nanoparticles from Prunus x yedoensis leaf extract. Mater Lett 185:335–338
46. Saravanakumar A, Peng MM, Ganesh M, Jayaprakesh J, Mohankumar M, Jang HT (2016) Low-cost and eco-friendly green synthesis of silver nanoparticles using Prunus japonica (Rosaceae) leaf extract and their antibacterial, antioxidant properties. Artif Cells Nanomed Biotechnol
47. Philip D (2011) Mangifera indica leaf-assisted biosynthesis of well-dispersed silver nanoparticles. Spectrochim Acta A Mol Biomol Spectrosc 78:327–331
48. Karuppiah M, Rajmohan R (2014) Green synthesis of silver nanoparticles using Ixora coccinea leaves extract. Mater Lett 72:367–369
49. Stephens IEL, Bondarenko AS, Grønbjerg U, Rossmeisl J, Chorkendorff I (2012) Understanding the electrocatalysis of oxygen reduction on platinum and its alloys. Energy Environ Sci 5:6744–6762
50. Ye W, Yu J, Zhou Y, Gao D, Wang D, Wang C, Xue D (2016) Green synthesis of Pt–Au dendrimer-like nanoparticles supported on polydopamine-functionalized graphene and their high performance toward 4-nitrophenol reduction. Appl Catal B Environ 181:371–378
51. Nasrollahzadeh M, Sajadi SM (2016) Green synthesis, characterization and catalytic activity of the Pd/TiO2 nanoparticles for the ligand-free Suzuki–Miyaura coupling reaction. J Coll Inter Sci 465:121–127
52. Sheny DS, Philip D, Mathew J (2013) Synthesis of platinum nanoparticles using dried Anacardium occidentale leaf and its catalytic and thermal applications. Spectrochim Acta A Mol Biomol Spectrosc 114:267–271
53. Porcel E, Liehn S, Remita H, Usami N, Koayashi K, Furusawa Y, Lesech C, Lacombe S (2010) Platinum nanoparticles: a promising material for future cancer therapy? Nanotechnology 21:085103
54. Kim EY, Ham SK, Shigenaga MK, Han O (2008) Bioactive dietary polyphenolic compounds reduce nonheme iron transport across human intestinal cell monolayers. J Nutr 138:1647–1651
55. Soundarrajan C, Sankari A, Dhandapani P, Maruthamuthu S, Ravichandran S, Sozhan G, Palaniswamy N (2012) Rapid biological synthesis of platinum nanoparticles using Ocimum sanctum for water electrolysis applications. Bioprocess Biosyst Eng 35:827–833
56. Moulton MC, Braydich-Stolle LK, Nadagouda MN, Kunzelman S, Hussain SM, Varma RS (2010) Synthesis, characterization and biocompatibility of “green” synthesized silver nanoparticles using tea polyphenols. Nanoscale 2:763–770
57. Lu Y, Yuan JY, Polzer F, Drechsler M, Preussner J (2010) In situ growth of catalytically active AuPt bimetallic nanorods in thermoresponsive core–shell microgels. ACS Nano 4:7078–7086
58. Coccia F, Tonucci L, Bosco D, Bressan M, d’Alessandro N (2012) One pot synthesis of lignin-stabilized platinum and palladium nanoparticles and their catalytic behaviours in oxidation and reduction reactions. Green Chem 14:1073–1078
59. Chu C, Su Z (2014) Facile synthesis of AuPt alloy nanoparticles in polyelectrolyte multilayers with enhanced catalytic activity for reduction of 4-nitrophenol. Langmuir 30:15345–15350
60. Song JY, Kwon EY, Kim BS (2010) Biological synthesis of platinum nanoparticles using Diospyros kaki leaf extract. Bioprocess Biosyst Eng 33:159–164
61. Long NV, Chien ND, Hayakawa T, Hirata H, Lakshminarayana G, Nogami M (2010) The synthesis and characterization of platinum nanoparticles: a method of controlling the size and morphology. Nanotechnology 21:035605
62. Thirumurugan A, Aswitha P, Kiruthika C, Nagarajan S, Nancy CA (2016) Green synthesis of platinum nanoparticles using Azadirachta indica – An eco-friendly approach. Mater Lett 170:175–178View ArticleGoogle Scholar
63. Rai A, Singh A, Ahmad A, Sastry M (2006) Role of halide ions and temperature on the morphology of biologically synthesized gold nanotriangles. Langmuir 22:736–741View ArticleGoogle Scholar
64. Yoshihisa Y, Zhao QL, Hassan MA, Wei ZL, Furuichi M, Miyamoto Y, Kondo T, Shimizu T (2011) SOD/catalase mimetic platinum nanoparticles inhibit heat-induced apoptosis in human lymphoma U937 and HH cells. Free Radic Res 45:326–335View ArticleGoogle Scholar
65. Bendale Y, Bendale V, Paul S, Bhattacharyya SS (2012) Green synthesis, characterization and anticancer potential of platinum nanoparticles bioplatin. Chin J Integr Med 10:681–689View ArticleGoogle Scholar
66. Sengupta P, Basu S, Soni S, Pandey A, Roy B, Oh MS, Chin KT, Paraskar AS, Sarangi S, Connor Y, Sabbisetti VS, Kopparam J, Kulkarni A, Muto K, Amarasiriwardena C, Jayawardene I, Lupoli N, Dinulescu DM, Bonventre JV, Mashelkar RA, Sengupta S (2012) Cholesterol-tethered platinum II-based supramolecular nanoparticle increases antitumor efficacy and reduces nephrotoxicity. Proc Natl Acad Sci U S A 109:11294–11299View ArticleGoogle Scholar
67. Hou J, Shang J, Jiao C, Jiang P, Xiao H, Luo L, Liu T (2013) A core crosslinked polymeric micellar platium(IV) prodrug with enhanced anticancer efficiency. Macromol Biosci 13:954–965View ArticleGoogle Scholar
68. Mironava T, Simon M, Rafailovich MH, Rigas B (2013) Platinum folate nanoparticles toxicity: cancer vs. normal cells. Toxicol In Vitro 27:882–889View ArticleGoogle Scholar
69. Wang J, Wang X, Song Y, Zhu C, Wang K, Guo Z (2013) Detecting and delivering platinum anticancer drugs using fluorescent maghemite nanoparticles. Chem Commun (Camb) 49:2786–2788View ArticleGoogle Scholar
70. Min Y, Li J, Liu F, Yeow EK, Xing B (2014) NIR light mediated photoactivation pt based antitumor prodrug and simultaneous cellular apoptosis imaging via upconversion nanoparticles. Angew Chem Int Ed Engl 53:1012–1016View ArticleGoogle Scholar
71. Pandey A, Kulkarni A, Roy B, Goldman A, Sarangi S, Sengupta P, Phipps C, Kopparam J, Oh M, Basu S, Kohandel M, Sengupta S (2014) Sequential application of a cytotoxic nanoparticle and a PI3 K inhibitor enhances antitumor efficacy. Cancer Res 74:675–85View ArticleGoogle Scholar
72. Kostova I (2006) Platinum complexes as anticancer agents. Recent Pat Anticancer Drug Discov 1:1–22View ArticleGoogle Scholar
73. Alshatwi AA, Athinarayanan J, Subbarayan PV (2015) Green synthesis of platinum nanoparticles that induce cell death and G2/M-phase cell cycle arrest in human cervical cancer cells. Mater Sci: Mater Med 26:7Google Scholar
74. Jensen TR, Malinsky MD, Haynes CL, Van Duyne RP (2000) Nanosphere lithography: tunable localized surface plasmon resonance spectra of silver nanoparticles. J Phys Chem B 104:10549–10556View ArticleGoogle Scholar
75. Smitha SL, Nissamudeen KM, Philip D, Gopchandra KG (2008) Studies on surface plasmon resonance and photoluminescence of silver nanoparticles. Spectrochim Acta A Mol Biomol Spectrosc 71:186–190View ArticleGoogle Scholar
76. Farabegoli F, Papi A, Bartolini G, Ostan R, Orlandi M (2010) (−)-Epigallocatechin- 3-gallate downregulates Pg-P and BCRP in a tamoxifen resistant MCF-7 cell line. Phytomedicine 17:356–362View ArticleGoogle Scholar
77. Kalimutho M, Minutolo A, Grelli S, Formosa A, Sancesario G, Valentini A, Federici G, Bernardini S (2011) Satraplatin (JM-216) mediates G2/M cell cycle arrest and potentiates apoptosis via multiple death pathways in colorectal cancer cells thus overcoming platinum chemo-resistance. Cancer Chemother Pharmacol 67:1299–12312View ArticleGoogle Scholar
78. Ahmed K, Wei Z, Zhao Q, Nakajima N, Matsunaga T, Ogasawara M, Kondo T (2010) Role of fatty acid chain length on the induction of apoptosis by newly synthesized catechin derivatives. Chem Biol Interact 185:182–188View ArticleGoogle Scholar
79. Alshatwi AA (2011) Catechin hydrate suppresses MCF-7 proliferation through TP53/Caspase-mediated apoptosis. J Exp Clin Cancer Res 29:167–176View ArticleGoogle Scholar
80. Cowley A, Woodward B (2011) A healthy future: platinum in medical applications. Platin Met Rev 55:98–107View ArticleGoogle Scholar
81. Endo K, Ueno T, Kondo S, Wakisaka N, Murono S, Ito M, Kataoka K, Kato Y, Yoshizaki T (2013) Tumor-targeted chemotherapy with the nanopolymer-based drug NC-6004 for oral squamous cell carcinoma. Cancer Sci 104:369–374View ArticleGoogle Scholar
82. Yang J, Sun X, Mao W, Sui M, Tang J, Shen Y (2012) Conjugate of Pt(IV)-histone deacetylase inhibitor as a prodrug for cancer chemotherapy. Mol Pharm 9:2793–2800View ArticleGoogle Scholar
83. Manikandan M, Hasan N, Wu HF (2013) Platinum nanoparticles for the photothermal treatment of Neuro 2A cancer cells. Biomaterials 34:5833–5842View ArticleGoogle Scholar
84. Nellore J, Pauline C, Amarnath K (2013) Bacopa monnieri phytochemicals mediated synthesis of platinum nanoparticles and its neurorescue effect on 1-methyl 4-phenyl 1, 2, 3, 6 tetrahydropyridine-induced experimental parkinsonism in zebrafish. J Neurodegener Disord 2013:972391Google Scholar
85. Periasamy VS, Alshatwi AA (2013) Tea polyphenols modulate antioxidant redox system on cisplatin-induced reactive oxygen species generation in a human breast cancer cell. Basic Clin Pharmacol Toxicol 112:374–384View ArticleGoogle Scholar
86. Yang X, Li Q, Wang H, Huang J, Lin L, Wang W, Sun D, Su Y, Opiyo JB, Hong L, Wang Y, He N, Jia L (2010) Green synthesis of palladium nanoparticles using broth of Cinnamomum camphora leaf. J Nanopart Res 12:1589–1598View ArticleGoogle Scholar
© The Author(s). 2016
|
Twenty Ideas for Engaging Projects
September 12, 2011 | Suzie Boss
The start of the school year offers an ideal time to introduce students to project-based learning. By starting with engaging projects, you'll grab their interest while establishing a solid foundation of important skills, such as knowing how to conduct research, engage experts, and collaborate with peers. In honor of Edutopia's 20th anniversary, here are 20 project ideas to get learning off to a good start.
1. Flat Stanley Refresh: Flat Stanley literacy projects are perennial favorites for inspiring students to communicate and connect, often across great distances. Now Flat Stanley has his own apps for iPhone and iPad, along with new online resources. Project founder Dale Hubert is recently retired from the classroom, but he's still generating fresh ideas to bring learning alive in the "flatlands."
2. PBL is No Accident: In West Virginia, project-based learning has been adopted as a statewide strategy for improving teaching and learning. Teachers don't have to look far to find good project ideas. In this CNN story about the state's educational approach, read about a project that grew out of a fender-bender in a school parking lot. When students were asked to come up with a better design for the lot, they applied their understanding of geometry, civics, law, engineering, and public speaking. Find more good ideas in West Virginia's Teach21 project library.
3. Defy Gravity: Give your students a chance to investigate what happens near zero gravity by challenging them to design an experiment for NASA to conduct at its 2.2 second drop tower in Brookpark, Ohio. Separate NASA programs are offered for middle school and high school. Or, propose a project that may land you a seat on the ultimate roller coaster (aka: the "vomit comet"), NASA aircraft that produces periods of micro and hyper gravity ranging from 0 to 2 g's. Proposal deadline is Sept. 21, and flight week takes place in February 2012.
4. Connect Across Disciplines: When students design and build kinetic sculptures, they expand their understanding of art, history, engineering, language arts, and technology. Get some interdisciplinary project insights from the Edutopia video, Kinetic Conundrum. Click on the accompanying links for more tips about how you can do it, too.
5. Honor Home Languages: English language learners can feel pressured to master English fast, with class time spent correcting errors instead of using language in meaningful ways. Digital IS, a site published by the National Writing Project, shares plans for three projects that take time to honor students' home languages and cultures, engaging them in critical thinking, collaboration, and use of digital tools. Anne Herrington and Charlie Moran curate the project collection, "English Language Learners, Digital Tools, and Authentic Audiences."
6. Rethink Lunch: Make lunch into a learning opportunity with a project that gets students thinking more critically about their mid-day meal. Center for Ecoliteracy offers materials to help you start, including informative essays and downloadable planning guides. Get more ideas from this video about a middle-school nutrition project, "A Healthy School Lunch."
7. Take a Learning Expedition: Expeditionary Learning schools take students on authentic learning expeditions, often in neighborhoods close to home. Check out the gallery for project ideas about everything from the tools people use in their work to memories of the Civil Rights Movement.
8. Find a Pal: If PBL is new to you, consider joining an existing project. You'll benefit from a veteran colleague's insights, and your students will get a chance to collaborate with classmates from other communities or even other countries. Get connected at ePals, a global learning community for educators from more than 200 countries.
9. Get Minds Inquiring: What's under foot? What are things made of? Science projects that emphasize inquiry help students make sense of their world and build a solid foundation for future understanding. The Inquiry Project supports teachers in third to fifth grades as they guide students in hands-on investigations about matter. Students develop the habits of scientists as they make observations, offer predictions, and gather evidence. Companion videos show how scientists use the same methods to explore the world. Connect inquiry activities to longer-term projects, such as creating a classroom museum that showcases students' investigations.
10. Learn through Service: When cases of the West Nile virus were reported in their area, Minnesota students sprang into action with a project that focused on preventing the disease through public education. Their project demonstrates what can happen when service-learning principles are built into PBL. Find more ideas for service-learning projects from the National Youth Leadership Council.
11. Locate Experts: When students are learning through authentic projects, they often need to connect with experts from the world outside the classroom. Find the knowledgeable experts you need for STEM projects through the National Lab Network. It's an online network where K-12 educators can locate experts from the fields of science, technology, engineering and mathematics.
12. Build Empathy: Projects that help students see the world from another person's perspective build empathy along with academic outcomes. The Edutopia video, "Give Me Shelter", shows what compassionate learning looks like in action. Click on the companion links for more suggestions about how you can do it, too.
13. Investigate Climate Science: Take students on an investigation of climate science by joining the newest collaborative project hosted by GLOBE, Global Learning and Observations to Benefit the Environment. The Student Climate Research Campaign includes three components: introductory activities to build a foundation of understanding, intensive observing periods when students around the world gather and report data, and research investigations that students design and conduct. Climate project kicks off Sept. 12.
14. Problem-Solvers Unite: Math fairs take mathematics out of the classroom and into the community, where everyone gets a chance to try their hand at problem solving. Galileo Educational Network explains how to host a math fair. In a nutshell, students set up displays of their math problems but not the solutions. Then they entice their parents and invited guests to work on solutions. Make the event even more engaging by inviting mathematicians to respond to students' problems.
15. Harvest Pennies: Can small things really add up to big results? It seems so, based on results of the Penny Harvest. Since the project started in New York in 1991, young philanthropists nationwide have raised and donated more than $8 million to charitable causes, all through penny drives. The project website explains how to organize students in philanthropy roundtables to study community issues and decide which causes they want to support.
16. Gather Stories: Instead of teaching history from textbooks, put students in the role of historian and help them make sense of the past. Learn more about how to plan oral history projects in the Edutopia story, "Living Legends." Teach students about the value of listening by having them gather stories for StoryCorps.
17. Angry Bird Physics: Here's a driving question to kickstart a science project: "What are the laws of physics in Angry Birds world?" Read how physics teachers like Frank Noschese and John Burk are using the web version of the popular mobile game in their classrooms.
18. Place-Based Projects: Make local heritage, landscapes, and culture the jumping-off point for compelling projects. That's the idea behind place-based education, which encourages students to look closely at their communities. Often, they wind up making significant contributions to their communities, as seen in the City of Stories project.
19. News They Can Use: Students don't have to wait until they're grown-ups to start publishing. Student newspapers, radio stations, and other journalism projects give them real-life experiences now. Award-winning journalism teacher Esther Wojcicki outlines the benefits in this post on the New York Times Learning Network. Get more ideas about digital-age citizen journalism projects at MediaShift Idea Lab.
20. The Heroes They Know: To get acquainted with students at the start of the year and also introduce students to PBL processes, High Tech High teacher Diana Sanchez asked students to create a visual and textual representation of a hero in their own life. Their black-and-white exhibits were a source of pride to students, as Sanchez explains in her project reflection. Get more ideas from the project gallery at High Tech High, a network of 11 schools in San Diego County that emphasize PBL. To learn more, watch this Edutopia video interview with High Tech High founding principal Larry Rosenstock.
Please tell us about the projects you are planning for this school year. Questions about PBL? Draw on the wisdom of your colleagues by starting discussions or asking for help in the PBL community.
|
#ifndef GAME_RAIL_H_
#define GAME_RAIL_H_
#include "eoavov.h"
/* This is the entity used for creating paths.
   tag - entity tag
   fromtag - tag of the previous rail entity
   totag - tag of the next rail entity
   revert - if set to 1, an entity moving along this route is sent back after reaching this rail; it also marks the entity's route as reverted
   arrivaltime - time (in millis) that moving from this rail to the next rail takes
   /newent rail tag fromtag totag revert arrivaltime
   It works like this:
   railent1 --------- railent2 ---------- railent3
   entity1^
   - after entity1's route is set, entity1 moves from railent1 to railent2, then from railent2 to railent3
   - moving between rails, entity1 uses the shortest path
*/
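/* Example (a hedged sketch: the /newent lines follow the command syntax
   documented above, while the C++ fragment is a hypothetical caller of the
   routeManager declared below, not code taken from this project):

     /newent rail 1 0 2 0 1000   // start rail: tag 1, no prev, next is tag 2
     /newent rail 2 1 3 0 1000   // middle rail: prev tag 1, next tag 3
     /newent rail 3 2 0 1 1000   // end rail: revert=1, entities are sent back

   A moving entity's per-frame update might then look roughly like:

     routeManager route;
     route.set(firstrail);   // firstrail: assumed to be a valid rail*
     if(!route.end())
     {
         // switch to the next rail once the current target is reached,
         // otherwise keep interpolating along the current segment
         if(route.finished(ent->o)) route.set(route.next, route.revert);
         else ent->o = route.move();
     }
*/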
struct rail;
namespace game{
    #define validrail(a) (a->arrivetime > 0 && (a->revert ? a->prev : a->next))
    extern vector<rail *> rails;
    void clearroutes();
}
struct rail{
    vec o;
    rail *prev, *next;
    int arrivetime, tag;
    bool revert;
    rail(extentity &e);
    ~rail() {}
};
struct routeManager{
    bool revert;
    int timestamp;
    vec dir;
    rail *cur, *next;
    routeManager() : cur(NULL), next(NULL) {}
    void set(rail *c, bool r = false);
    bool end();
    bool finished(const vec o);
    vec move();
};
#endif
|
Tuesday, June 10, 2014
Comrade Abraham: Was President Lincoln a Closet Marxist?
by Nomad
Abe Lincoln Labor
When we think of Lincoln, most of us do not consider the sixteenth president as a Marxist revolutionary. Yet, a little research uncovers some very interesting- slightly confounding- connections between Abe Lincoln and the father of the Communist movement. As fascinating as that might be, there is an even bigger shock in store when it comes to the origins of the Republican Party.
This quote in the meme above reportedly came early in his political career (December 1847). For some of us who grew up thinking of "Honest Abe" as a folksy backwoods lawyer, it's a bit jarring to hear him talking about labor issues. It's not the image many of us have of the man who freed the slaves and held the nation together. (It's hard enough to think of him as a Republican.)
But there was more to that quote. Lincoln also wrote in that same passage:
It's impossible to imagine that any president would dare to say such a thing today. And especially not a Republican one. Any conservative politician who expressed such thoughts today could expect to be skewered alive and roasted slowly (with relish) live on Fox News.
The Battle of the Quotations
As we are all well aware, politicians tend to talk more than necessary and, in doing so, say a lot of nonsense, especially early in their careers. In Lincoln's case, however, there is more to it than that. We know that from early in his career, Lincoln's ideas did not change but actually expanded.
While speaking at the U.S. Sanitary Commission Fair in Baltimore on April 18, 1864, Lincoln pointed out that liberty can mean different things to different people. One man's definition of liberty can easily be another man's definition of tyranny and oppression.
"The world has never had a good definition of the word liberty, and the American people, just now, are much in want of one. We all declare for liberty; but in using the same word we do not all mean the same thing. With some the word liberty may mean for each man to do as he pleases with himself, and the product of his labor; while with others the same word may mean for some men to do as they please with other men, and the product of other men's labor. Here are two, not only different, but incompatable [sic] things, called by the same name – liberty. And it follows that each of the things is, by the respective parties, called by two difference and incompatable [sic] names – liberty and tyranny."
The key phrase here is "the product of other men's labor." It was the same phrase he used in the above quote.
It's hard not to see in that phrase some degree of Marxist thought. Compare it to what the Communist Manifesto has to say:
Whatever his personal views on private wealth might have been, there is another quote by Lincoln we should consider. It comes from his debates with Douglas and suggests that Lincoln believed all men had a right to the fruits of their labor. He said:
"That men who are industrious, and sober, and honest in the pursuit of their own interests should after a while accumulate capital, and after that should be allowed to enjoy it in peace, to use it to save themselves from actual labor and hire other people to labor for them is right."
This is a line that is often cited by conservatives. However, the message was not quite as absolute as they suggest. They tend to neglect the lines that followed:
In doing so they do not wrong the man they employ for they are benefited by working for others, hired laborers receiving their capital for it. Thus a few men that own capital, hire a few others, and thus establish the relationship of capital and labor rightfully. A relation of which I make no complaint. But I insist that that relation after all does not embrace more than one-eighth of the labor of the country.
Lincoln does not appear to be referring to corporations hiring thousands of workers, but to small businesses run by "industrious, sober and honest" employers. A "rightful" relationship between labor and capital was Lincoln's somewhat far-fetched dream.
With its sweatshops and child labor, its unsafe conditions and unsustainable wages, the capitalist system that emerged in the decades after his murder was largely created by a collaboration with the Republican party of the 1870s and '80s. Still worse, that system took on a form much more like slavery and peonage and less like the harmonious relationship Lincoln supported.
It was called, even at that time, "wage slavery."
Socialist Horace Greeley,
founder of the Republican Party
The Tribune Connection
Still, this doesn't answer the main question of whether Marx had any influence in Lincoln's positions, before or during his presidency. If so, then how much influence?
In fact, apart from general similarity in ideologies and thoughts of the day, there was a more concrete connection between Karl Marx and Abraham Lincoln.
It came in the person of Charles Dana.
In 1849 Dana was appointed managing editor of the Tribune, the most consistently influential of nineteenth-century American newspapers, owned by Horace Greeley.
First a little more about Greeley.
In 1854 in a small town in Wisconsin, abolitionist Greeley became one of the founders of the new Republican Party. According to some historians, Greeley even helped to secure the presidential nomination for Abraham Lincoln in 1860.
Combining business with politics, Greeley made the Tribune the Republican party's unofficial national organ. The weekly nationwide edition of the paper promoted a wide variety of interests and causes.
Greeley and the Tribune spoke out in opposition to such things as government support of the railroads, the massive accumulation of wealth in the hands of a few, monopolies and land speculators. (Exactly the kinds of things the Republicans today would support and close down the US government over.)
As the Encyclopedia Britannica notes, Greeley urged a number of educational reforms, especially free common-school education for all; he championed producers’ cooperatives- meaning unions- (but opposed woman suffrage).
It was Greeley's position of the abolition of slavery that dominated his politics. And his newspaper served up the kind of news that the radical activist abolitionists in the North wanted to hear. Constant (and mostly-true) tales of slave owners abuse consolidated and hardened public opinion in the North against the plantations of the cotton-producing South.
In fact, the Tribune's extreme position on this subject made it a target for the paper's only rival, The Herald. The editors of the Herald were able to stoke working-class resentment and the prejudice and fears of the immigrants of the North. Elevating the status of blacks would come at the expense of white immigrant labor, the editors of the Herald suggested. They maintained that Greeley was attempting to "assert the equality of white men and the Negroes."
Playing the low-wage immigrants against the no-wage slaves proved quite effective.
And if that ploy was not offensive enough, the writers of the Herald also claimed that Greeley was promoting the ideas of free-love-ism, "and all men should have property in common, the family should be abolished and that all women should be common prostitutes."
These accusations are practically a check-list of the demands found in the Communist Manifesto.
Radical Republicans of the Left
The attorney Clarence Darrow once called Greeley's New York Tribune “the political and social Bible” of every reforming, radical and Republican household. Among other contributors employed by the newspaper were Margaret Fuller (the first major feminist), and Henry James Sr. and Albert Brisbane, both social utopians.
(To cite one example of the kind of ideas that these writers advocated: Brisbane was instrumental in promoting the revolutionary utopian ideas of François Marie Charles Fourier. Fourier was responsible for the creation of "communes" based on a new world order which included equal rights for women, acceptance of homosexuality, and worker rights.)
Whether the Republican Party today cares to admit it or not, the father of the party, Greeley, was a proud champion of radical Socialism. (If there is any doubt, the 1892 book Horace Greeley and Other Pioneers of American Socialism should put all doubts to rest.)
The Tribune produced a form of advocacy journalism for Far Left politics. Ironic, when you think of all the times that Fox News has condemned the largely fictional liberal bias of the news media.
History, as we have seen in our own age, is not something the Republican party does well. There are plenty of good reasons for that.
It's hard to maintain conservative pro-capitalist credentials when the far-right Republican Party itself was founded by a group of Socialists who promoted a Socialist agenda, with all manner of radical rhetoric, through a Socialist newspaper.
If you are feeling dizzy, just relax: it's quite normal. Your world has been turned upside down.
Charles Dana
Dana's Patronage
By any measure, Charles Dana was a gifted man and he was also a forgotten behind-the-scenes player in American history.
As biographical notes for Dana point out:
But as we have seen, Dana's editorial agenda at the Tribune was by no means limited to the anti-slavery movement. Greeley's biographer, Jeter Allen Isely mentions Dana and his European connections:
"Immediately upon joining the Tribune, Dana went abroad to cover the 1848 revolutions in Europe, where he came under the influence of the socialist Pierre Joseph Proudhon, and where he met Karl Marx, whom he subsequently engaged as a London correspondent for the Tribune."
For 13 years, Dana and Greeley worked together, and for ten of those years, Marx served as a foreign correspondent in England. Since 1848, Marx had held the public's attention for his contribution to The Communist Manifesto. That book introduced the idea that capitalist societies would eventually be replaced by socialism, and then by communism.
Like the Tribune throughout the 1850s and during the Civil War, Marx felt that the defeat of slavery would result in a golden age for labor.
As a correspondent for the Tribune in England, Dana's arrangement with Karl Marx provided him with much-needed income. At that desperate time in Marx's career, his friend and collaborator Friedrich Engels could only provide limited financial support.
History records that Engels came up with a clever solution. He would write and submit articles for publication and Marx would receive the credited by-line. The arrangement lasted ten years, with the final Marx column being published in February 1862.
(The relationship is generally only footnoted and passed along without much comment. One source says that Marx worked "briefly" for the Tribune. However, a full decade is hardly a brief period. Here is a list of the articles that Marx wrote for the Tribune.)
In any case, it comes as a shock to learn that the Tribune- under Dana and Greeley- became both a voice of the emergent Communist movement- straight from the words of its leaders- as well as a supporter for the newly hatched Republican party.
The Rise and Fall and Rebirth Of Dana
During these years, Dana, wrote Greeley biographer Don C. Seitz, had become "dominant in the Tribune office and made much trouble for his superior, altering and suppressing his articles in accordance with his own journalistic judgment, which was generally good."
However, not everybody agreed with that assessment. Dana's critics at a rival newspaper had this to say:
"The news value of the Tribune during the spring and early summer of 1861 was very slight. Its whole tone and tenor were hysterical. It shrieked, it threatened, it scolded, and it denounced. Its columns were filled with rumors and counter-rumors; and with advice, mostly useless, to the government, to merchants, the public, to labor, and the farmers. It bragged and it boasted. It sniveled and sneered. It demanded action!"
On the other hand, Greeley was cantankerous and overbearing. At around that time, he even publicly castigated the president for his lack of leadership in an August 1862 op-ed piece called "The Prayer of Twenty Millions."
After repeated clashes of egos between Dana and Greeley, the board of managers of the Tribune, led by Greeley, asked for Dana's resignation in April, 1862.
Coincidentally, in the year of his dismissal, Dana immediately- in fact, within days- found work as a trusted member of Lincoln's staff. Later, Dana gave this account of the events:
My retirement from the Tribune was talked of in the newspapers for a day or two, and brought me a letter from the Secretary of War, Edwin M. Stanton, saying he would like to employ me in the War Department. I had already met Mr. Lincoln, and had carried on a brief correspondence with Mr. Stanton.
Within a short period, he was selected to be the assistant Secretary of War and remained in that spot from 1863 to 1865. In fact, the Secretary of War, Edwin Stanton, gave him his highest trust by making Dana a special investigating agent of the War Department, touring the military camps and reporting back what he found.
This was not merely an act of Providence. It came about as a result of carefully calculated planning by Dana.
According to Dana's biographer,
Since Lincoln's inauguration in 1861, Dana had worked at establishing close ties with key members of the president's cabinet.
Thus, Charles Dana, the former boss of Karl Marx (and indirectly of Friedrich Engels), became, as journalist and historian Alexander K. McClure described it, one of "the few men who enjoyed Lincoln's complete confidence."
To suspicious modern eyes, Dana, the long-time friend of the Marxists, had successfully infiltrated the White House. That is one way to view the events but how much influence Dana might or might not have had is debatable.
Karl Marx, circa 1861
Salute to the Single-Minded Son
Of course, Marx and Lincoln never met and discussed the future of capitalism in the United States.
However, Karl Marx did send a congratulatory letter to Lincoln upon his re-election on November 22, 1864.
In a letter on behalf of the International Workingmen's Association, Marx connected the liberation of the slave class to the eventual liberation of all laborers, whether they be enslaved or hired.
We know also that Marx's letter was passed along to the president. The reply, written by Charles Francis Adams, the United States minister to Britain through whom the correspondence passed, was carefully worded.
The Government of the United States, Adams wrote, "strives to do equal and exact justice to all states and to all men and it relies upon the beneficial results of that effort for support at home and for respect and good will throughout the world."
As we know, that dream of the emancipation of the working class, with a fair and balanced relationship between labor and capital, died with Lincoln on the night of April 14, 1865, some five months after Marx's letter.
|
How Do Parents Pay for College?
Illustration by Barry Falls
Now that my son is a college freshman, I see everything in terms of tuition payments. I know that our monthly mortgage is less than our monthly college bill. That a certain freelance assignment will or will not cover a month at school. That the anemic levels of our 529 accounts have rebounded enough to pay for a few months more.
When I was in college, tuition reached $10,000, a number so shocking that we ran it as the headline across the front page of the school newspaper. Now the numbers are several times that, outpacing inflation. They almost don’t seem real.
I got an e-mail message from a reader asking how the average, hard-working, middle-class parent can possibly “make” these numbers with only an 18-year head start.
I passed the question along to Jean Chatzky, a financial journalist and the author of “Money 911: Your Most Pressing Money Questions Answered, Your Money Emergencies Solved.” The reader’s original question and Jean’s answer are below. Please use the comments to add your own questions and advice about how parents can manage this.
My husband and I both grew up in high-middle-income families (whatever that means) and our parents made it work and paid for our undergraduate education. They saved well and spent carefully. My sons are 2½ years old and 9 months, and we are barely making ends meet with our mortgage payment, car payment, daycare costs, etc. We have no “extras” left to cut out! I’m a social worker and my husband is a research professor; we’re educated professionals who make what I think is a normal amount of money. How are people doing this?
The answer is, they’re not. At least not in the way our parents did. I, like you, grew up in a middle-class family. For most of my childhood, my father was a college professor and my mother substitute-taught. They paid my college tuition by subdividing the property our house stood on and selling half the lot (for which I’ve always been grateful).
But so much has changed since then — most importantly tuition prices, which in recent decades have been going up at two to three times the rate of inflation. That’s the reason that two-thirds of students receive at least some financial aid today.
So, here’s my suggestion: Don’t aim to pay the entire bill. If you can come up with a third of the money your kids need for college before they go, you are doing a good job. Think about paying another third out of your then-cash flow. And have them borrow the final third.
How do you do that in your current situation?
First, make sure that you are saving and investing for your own retirement. I assume you’re contributing to your workplace retirement plans — that is a must!
Then work off two things: salary increases and windfalls. Open a 529 college-savings account (shop around to find the best plan for you).
Then, the next time you or your spouse gets a raise, figure out how much your take-home pay will increase and schedule automatic transfers to that 529 in that amount the day you get paid. Move windfalls to your 529 as well. If you get a tax refund, deposit that money. (Then change your withholding to boost your take-home pay and increase your automatic contributions so that you’re no longer giving Uncle Sam an interest-free loan.) Birthday and holiday checks can go into the account as well.
As soon as you see you’re making progress, you’ll start feeling — as a parent — like you’re doing a better job.
|
/***************************************************************************
OgreIPLVirtualPageWindow.h
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.
This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 59 Temple
Place - Suite 330, Boston, MA 02111-1307, USA, or go to
http://www.gnu.org/copyleft/lesser.txt.
***************************************************************************/
#ifndef __IPLVIRTUALPAGEWINDOW_H__
#define __IPLVIRTUALPAGEWINDOW_H__
#include "OgreIPLPage.h"
#include <OgreSceneNode.h>
namespace Ogre
{
class IPLSceneManager;
/** This is a virtual window page manager for organizing Pages into a visible landscape.
*/
class IPLVirtualPageWindow
{
public:
IPLVirtualPageWindow( IPLSceneManager* sm );
virtual ~IPLVirtualPageWindow( );
/** setup page windows by initializing both page vector containers
to the required size and fill with pages in active window and NULL
in old window
@param xcount number of pages in window on x axis
@param zcount number of pages in window on z axis
*/
void setPageWindowSize( const int xcount, const int zcount );
/** update the window view based on the new position
@remarks
the window manager will shift the window view if
the view position is outside the view boundaries
if the view position is still in the view boundaries then nothing is done
*/
void updateViewPosition( const float x, const float z );
/** change the parameters that define how big the world map is
@param PageXCount number of pages that are along the X axis in the Map
@param PageZCount number of pages that are along the Z axis in the Map
@param PagePixelSize is the width and height in pixel of the height map data that defines the page
*/
void setWorldMapSize( const int PageXCount, const int PageZCount, const int PagePixelSize );
/** change the world scale - values must be > 0
*/
void setWorldScale( const float x, const float z );
/** get the height of the terrain at the world coordinates x,z
@remarks
if the world coordinates are within the virtual page window then the actual world height
is returned. If outside the virtual window then -1 is returned
@param worldx real world position on x axis
@param worldz real world position on z axis
*/
float getWorldHeight( float worldx, float worldz );
IPLSceneManager* getSceneManager( void ) const;
SceneNode* getRootSceneNode( void ) const;
private:
void _preLoad( void );
/** swap ActiveWindow and OldWindow. Normally done after the pages have been shuffled in the window.
*/
void _switchWindow( void ); // TODO: Not implemented
/** shift the pages in the active window into the Old window
after the shift each page is informed of its new map index and new position for the scene node
Swap Active window with old window so the shuffled windows become the active view
@param ShiftX number of pages to shift on the x axis
@param ShiftY number of pages to shift on the z axis
*/
void _shufflePages( const int ShiftX, const int ShiftY );
/** notify pages in window of what map indexes and scene node position they should have
this method would be called after calling shufflePages
It is up to the pages to decide if they need to unload and reload. Moving the scene node is up to them.
*/
void _updatePageMapIndexes( void ); // TODO: Not implemented
/** checks if the world coordinates are within the window center region
returns true if the view is within the boundaries
@param worldx position on x axis in the world
@param worldz position on z axis in the world
*/
bool _isViewCentered( const float worldx, const float worldz ) const ;
/** using the world coordinates as the new center view position calculate
min x, max x, min z and max z that define the center boundaries. These boundaries are
used by _isViewCentered to determine if the view position is still within
the center boundaries
@param x world position on world x axis
@param z world position on world z axis
*/
void _setCenterAreaBoundries( const float worldx, const float worldz );
/** convert world coordinates into page map index
assumes the map repeats in x and z direction in the world
works for negative coordinates also
normally used to find the center page in the window
@param x world coordinate on x axis
@param z world coordinate on z axis
@param PageX reference to page index on x axis that will receive converted value
@param PageZ reference to page index on z axis that will receive converted value
*/
void _getPageMapIdxFromWorldPosition( float x, float z, int & PageX, int & PageZ );
/** get the Map Page offsets using absolute page indexes
@remarks the method assumes that the map is repeating in both x and z coordinates and that the map
has a size equal to or greater than one on both axes
@param AbsolutePage_x page index on x axis
@param AbsolutePage_z page index on z axis
@param MapPageOffsetX reference that will receive the page offset in relation to the map on the x axis
@param MapPageOffsetZ reference that will receive the page offset in relation to the map on the z axis
*/
void _getMapPageOffset( const int AbsolutePage_x, const int AbsolutePage_z, int & MapPageOffsetX, int & MapPageOffsetZ );
/** Get the Page absolute indices from a world position vector
@remarks
Method is used to find the Page indices using a world position.
Beats having to iterate through the Page list to find a page at a particular
position in the world.
The page indices returned are not clamped to the map page count.
@param worldx position on x axis
@param worldz position on z axis
@param x result placed in reference to the x index of the page
@param z result placed in reference to the z index of the page
*/
void _getWorldPageIndices( const float worldx, const float worldz, int& x, int& z );
float mWorldScale_x;
float mWorldScale_z;
int mPagePixelSize;
float mBoundryExtent;
int mWindowPageCount_x; // number of pages that make up the window along the x axis
int mWindowPageCount_z; // number of pages that make up the window along the z axis
int mWorldMap_PageCount_x; // number of pages that make up the map along x axis
int mWorldMap_PageCount_z; // number of pages that make up the map along z axis
int mAbsolutePageOffset_x; // position of window in absolute page index from origin on x axis
int mAbsolutePageOffset_z; // position of window in absolute page index from origin on z axis
int mMapPageOffset_x; // relative Map page index for bottom left corner of Window on x axis
int mMapPageOffset_z; // relative Map page index for bottom left corner of Window on z axis
// center area boundary
float mBoundryMinX;
float mBoundryMinZ;
float mBoundryMaxX;
float mBoundryMaxZ;
IPLSceneManager* mSceneManager;
SceneNode* mRootSceneNode;
typedef std::vector < IPLPage * > IPLPageRow;
typedef std::vector < IPLPageRow > IPLPages;
IPLPages mActiveWindow;
IPLPages mOldWindow;
};
}
#endif
|
#include "../streamer/streamer.hpp"
#include "../streamer/optional.hpp"
#include <cassert>
#include <vector>
/*
 * or_get(Func) operates on a std::optional and returns either the value the
 * std::optional contains, or, if the std::optional is empty, calls the Func
 * passed to or_get and returns its result.
 */
void example_or_get() {
using namespace streamer;
std::vector<int> input = {15, 23, 4};
std::vector<int> empty_input = {};
int count = 0;
int count2 = 0;
int result = input
>> first
>> or_get([&count]() { count++; return 999; });
int result2 = input
% first
% or_get([&count]() { count++; return 999; });
int result3 = empty_input
| first()
| or_get([&count2]() { count2++; return 999; });
assert(result == 15);
assert(result2 == 15);
assert(result3 == 999);
assert(count == 0);
assert(count2 == 1);
}
|
U.S. Energy Information Administration - EIA - Independent Statistics and Analysis
Today in Energy
The Strait of Hormuz, located between Oman and Iran, connects the Persian Gulf with the Gulf of Oman and the Arabian Sea. Hormuz is the world's most important oil chokepoint, with a flow of almost 17 million barrels per day (bbl/d) in 2011, up from 15.5-16.0 million bbl/d in 2009-2010. Flows through the Strait in 2011 were roughly 35% of all seaborne traded oil, or almost 20% of oil traded worldwide.
On average, 14 crude oil tankers per day passed through the Strait in 2011, with a corresponding amount of empty tankers entering to pick up new cargos. More than 85% of these crude oil exports went to Asian markets, with Japan, India, South Korea, and China representing the largest destinations.
At its narrowest point, the Strait is 21 miles wide, but the width of the shipping lane in either direction is only two miles, separated by a two-mile buffer zone. The Strait is deep and wide enough to handle the world's largest crude oil tankers, with about two-thirds of oil shipments carried by tankers in excess of 150,000 deadweight tons.
Several alternatives are potentially available to move oil from the Persian Gulf region without transiting Hormuz, but they are limited in capacity, in many cases are not currently operating or operable, and generally engender higher transport costs and logistical challenges.
- Alternate routes include the 745-mile Petroline, also known as the East-West Pipeline, across Saudi Arabia from Abqaiq to the Red Sea. The East-West Pipeline has a nameplate capacity of about 5 million bbl/d, with current movements estimated at about 2 million bbl/d.
- The Abqaiq-Yanbu natural gas liquids pipeline, which runs parallel to the Petroline to the Red Sea, has a 290,000-bbl/d capacity.
- Additional oil could also be pumped north via the Iraq-Turkey pipeline to the port of Ceyhan on the Mediterranean Sea, but volumes have been limited by the closure of the Strategic Pipeline linking north and south Iraq.
- The United Arab Emirates is also completing the 1.5 million bbl/d Abu Dhabi Crude Oil Pipeline that will cross the emirate of Abu Dhabi and end at the port of Fujairah just south of the Strait.
- Other alternate routes could include the deactivated 1.65-million bbl/d Iraqi Pipeline across Saudi Arabia (IPSA) and the deactivated 0.5 million-bbl/d Tapline to Lebanon.
EIA's World Oil Transit Chokepoints analysis brief contains additional information about other chokepoints, and the Middle East & North Africa overview contains additional information about countries in the region.
|
An Interview with Alexandra Burt
Alexandra Burt is the author of the bestselling Remember Mia. Her new novel is The Good Daughter.
Alexandra Burt is the author of the novels Remember Mia and The Good Daughter. She was born in Fulda, Germany, a baroque town in the East Hesse Highlands. Days after her college graduation she boarded a flight to the U.S. She ended up in Texas, married, and explored a career in the student loan industry. After the birth of her daughter she became a freelance translator, determined to acknowledge the voice in the back of her head prompting her to break into literary translations. The union never panned out and she decided to tell her own stories. She currently lives in Central Texas with her husband, her daughter, and two Chocolate Labrador Retrievers. One day she wants to live on a farm and offer old arthritic dogs a comfy couch to live out their lives. She wouldn’t mind a few rescue goats, chickens, and cats. The more the merrier. She is a member of Sisters In Crime, a nationwide network of women crime writers.
To read an excerpt from Burt’s new novel The Good Daughter and an exercise on moving between exterior action and interiority, click here.
In this interview, Burt discusses prologues, shifting between time periods in a novel, and the lure and importance of setting.
Michael Noll
I really admire the prologue of The Good Daughter, which does the work that so many prologues do: setting up situation, creating suspense. But it also spends time in Dahlia’s head, building her as a character, which can be difficult to do when you’re focused on hooking readers with story. How did you approach this prologue? Was it written early or late in the process?
Alexandra Burt
Prologues shouldn’t be too elusive; after all, we don’t yet care about characters we haven’t even met. You can reveal character and move the plot along at the same time, like an opening scene in a movie. In The Good Daughter I wanted to create suspense and arouse curiosity regarding plot as well as characters.
The prologue was written early on as a vignette: it was the moment two characters meet, Dahlia as a child doing what she spent the better part of her life doing, going from place to place without really belonging, wondering what’s in store in the next state, the next city. It is a crossroads of sorts for the main character, a metaphor for her life and the beginning of putting down roots in Aurora, Texas. She has an encyclopedia in her lap, and if she can’t figure out where she’s going, she can at least look up the meanings of words she encounters along her journey. So in a way she does what she’s going to do for the entire novel: figure out the meaning of her memories, her mother’s stories. The prologue is also chock-full of symbols: the first few pages of the encyclopedia are missing, the number seven (the seeker of truth), Red Vines turning her lips crimson. I play with symbolism a lot, sometimes on purpose; sometimes it’s just the way my scribble ends up on the page. It is also very concrete in being a scene at a diner, a suspicious meeting by the side of the road. A prologue can do many things, like the opening scene of a movie.
Michael Noll
The novel moves back and forth between Dahlia’s present and past. Moves like this can be a risk in that readers become so engaged in one story line and moment that the shift in time feels like an interruption. That isn’t the case here. Did you move back and forth as you wrote, or did you focus on one and then the other before breaking them into pieces?
Alexandra Burt
Alexandra Burt’s novel The Good Daughter tells the story of a woman uncovering secrets from her childhood that some people want to stay buried.
I immensely enjoy novels that move back and forth between present and past—The Weight of Water by Anita Shreve comes to mind—but moving back and forth can be a tricky structure, I agree. Advantages of a dual timeline are a deeper plot and theme and greater character development. Disadvantages are that readers lose interest or get confused and frustrated. One can lose a reader at the drop of a hat unless both storylines are equally captivating.
The characters in The Good Daughter fed off each other and I jumped back and forth as I wrote. I had a plot in mind but I allowed the present and past to feed off each other. There was a tangible connection that I explored as I went along—the past had never died, its symbol the farmhouse that stood untouched for decades. I had to pay close attention to the transitions and really connect the two plots toward the end of the story. In general, there should be a strong relationship between the two plots, geographically, symbolically, or otherwise, and both stories must be strong in their own right.
Michael Noll
The novel is a mystery, but it’s also in many ways a quiet novel about a particular place. I’m curious which of these elements—the mystery or the sense of place—first drew you to these characters and story?
Alexandra Burt
It began as a mystery in a Texas setting: a body in the woods, an olfactory disorder, and a possible serial killer. The original title was Scent of a Crime. At some point I realized that I wanted to add another layer to the novel; I may have constructed a plot-driven mystery but something was amiss. I wanted the setting to be a character in itself and in many ways the story required a kind of Texas that was deeper than tacos and football and rodeos—forgive me for stereotyping—a Texas that could seep into the reader’s pores. I imagined a small town forgotten by time but also a place where secrets don’t die, where buildings sit untouched for decades, where the ghosts of the past remain. Once Aurora came alive, the story changed from plot-driven to a more character-driven novel. There is history wherever you go all over this country, some well-known and documented, but there need not be a historical marker or tourist attraction in order to tell a story about the place and the people. Aurora, though fictional, was such a place; once I imagined it, there was no going back and it took on a life of its own.
Michael Noll
You’re a member of Sisters in Crime, the national network of women crime writers–and I know there’s an active group here in Austin. A lot of writers are familiar with MFA programs and don’t necessarily know about groups like Sisters in Crime. What role has the group played in your development as a writer?
Alexandra Burt
I live about an hour north of Austin and I can’t participate in meetings as much as I want to, unfortunately. As a writer—and writing is a solitary profession—we need to belong and network and support each other. There still is a gender bias when it comes to women writing crime, even though women seem to dominate the headlines ever since Gone Girl hit the shelves. But the numbers speak to a deeper truth: only one third of published authors across all genres are women, and therefore, by default, books written by men are reviewed disproportionately more in the media, and consequently men win more awards than women. It is important for women to support each other.
There are local chapters all over the country, even a special chapter, The GUPPIES, for beginning writers who share publishing information and offer critique groups. The organization has been around since 1986 and has been thriving ever since. We are here to stay.
“You write alone, but you are not alone with Sisters,” as they say.
March 2017
An Interview with Ru Freeman
Ru Freeman’s novel On Sal Mal Lane was called, by Cheryl Strayed, “Piercingly intelligent and shatter-your-heart profound.”
Ru Freeman was born in Colombo, Sri Lanka, and is the author of the novels Disobedient Girl and On Sal Mal Lane. She is also the editor of the forthcoming anthology, Extraordinary Rendition, a collection of the voices of American poets and writers speaking about America’s dis/engagement with Palestine. She has worked in the field of American and international humanitarian assistance and workers’ rights, and her political writing has appeared in English and in translation. Her creative work has appeared or is forthcoming in VQR, Guernica, World Literature Today and elsewhere. She is a contributing editorial board member of the Asian American Literary Review and a fellow of the Bread Loaf Writer’s Conference, Yaddo, Hedgebrook, and the Virginia Center for the Creative Arts. Freeman won the 2014 Janet Heidinger Kafka Prize for Fiction by an American Woman. She calls both Sri Lanka and America home.
To read an exercise on using an omniscient narrator and an excerpt from Freeman’s novel On Sal Mal Lane, click here.
In this interview, Freeman discusses the challenges of explaining historical context in a novel and creating an omniscient narrator and the politics of Sri Lanka and On Sal Mal Lane.
Michael Noll
On Sal Mal Lane begins with a prologue that functions very much like the infamous prologue to Star Wars. It sets up the politics, geography, and history of the place—and also indicates that, in the story’s beginning at least, the major conflict is some miles away from the main characters. What was your approach to this prologue? Do you think it would have been written the same if you could assume that your readers knew a lot about Sri Lanka and its civil war?
Ru Freeman
I like the way you use that to discuss the book. The prologue in this form was added after I had written the first draft. The original prologue, several pages longer, focused mainly on the characters, and all of it eventually got whittled down to that last paragraph. When I finished writing the book, I felt that there was a sense of longer-term history that couldn’t be contained within the main text of the book without burdening it with those kinds of explanatory treatises on history that can kill momentum. It was necessary that people understood that there was this regional and international context, this history of colonization and brutality, but also that, in the end, none of those things were relevant to the daily lives of ordinary people like those who lived on Sal Mal Lane. As a way of tracing immediate history to a pivotal moment, I included the murder of Alfred Duraiappah and the call to war by Prabhakaran. Whether people knew this history or not, setting it down with those few brushstrokes helped to establish the voice of the narrator who is, to continue with your image, a Yoda-like character who knew all that came before and all that was to come to pass and could maintain both warmth and distance from every composite part of the story—the human and the inanimate.
Michael Noll
The prologue also has this remarkable pair of sentences:
“And who, you might ask, am I? I am nothing more than the air that passes through these homes, lingering in the verandas where husbands and wives revisited their days and examined their prospects in comparison to those of their neighbors.”
In essence, you have created an omniscient narrator and then embodied it in something of the novel’s world. Was this a conscious decision—in response, perhaps, to readers or yourself wondering who was speaking? Or did these sentences arise spontaneously in an early draft?
Ru Freeman
Ru Freeman’s novel On Sal Mal Lane “soars [with] its sensory beauty, language and humor,” according to a New York Times review.
It was an asking of myself as I tried to wrap my head around this voice that had come into being while writing the earlier version of the prologue, and the novel itself. It occurred to me that the narrator here was someone (or in this case perhaps something, the road), who was intimately familiar with this place, with compassion for everyone, but a particularly keen fondness for two of the characters, Mr. Niles, and Nihil. In the scheme of things there is no one main character here, but the ties that bind these two are elevated above all the other bonds that form—and are broken—between the people of Sal Mal Lane. Why this voice lingered over those two characters got me thinking about the entity to whom the voice belonged. So, it was spontaneous, in one sense, but also deliberate.
Michael Noll
Each chapter gets a title. Obviously this is something that some books do and some don’t. What made you choose to title them?
Ru Freeman
In my first novel, I alternated the story between Biso (an older woman leaving an abusive husband, taking her three children with her on a journey that lasts just about 36 hours, all related in the first person), and Latha (a little girl who comes to live in a house as a companion to a girl her own age who lives there, and whose story covers about three decades and is told in the third person). When I began this book, I imagined that I’d write it by alternating the voices of the children, staying close to each in turn, sort of like what Barbara Kingsolver did with The Poisonwood Bible. I must have written about a third of the book when I began to feel oppressed by this framework. I abandoned it as a strict guideline and began to simply write the story, though, as you can perhaps tell, I do concentrate on one or the other of the children as I go along, at least in certain parts. I decided to break the book up by year into sections, and then title the chapters. I enjoyed coming up with those titles. It’s not something people do too often, as you point out, but it is a lot of fun and if I’m having fun then the writing tends to be better than when I’m straining.
Michael Noll
At the risk of veering into politics, I was reading this novel when Sri Lanka held its presidential election in January, and so I couldn’t help holding the two events (the events of the novel and the election) side by side. In the novel, animosity is rising between Tamils and Sinhalese. Now, the war is over, and the minority groups (including the Tamils) who suffered during it have managed to vote out the president who claimed credit for ending the war. Do you imagine Sal Mal Lane today? Do the current events cause you to think about the years of the novel in a different light or way?
Ru Freeman
Freeman's website contains what is, perhaps, the most comprehensive list in existence of Sri Lankan writers.
There is never a veering into, I think. We are always situated quite firmly and centrally in the middle of politics. As far as the election goes, while it is true that many ordinary citizens came together to vote out the former president, there were machinations that went beyond Sri Lanka, including the United States, to bring the current one into power. When I hear the rhetoric from the new leadership, I don’t feel optimistic; the alignment of the new president is with the United National Party, which in its time of power reigned over the massacre of more than 60,000 youth. The language used is old, it panders to American interests, and it is, frankly, disorderly. That combination can be deadly in a country like Sri Lanka, with a highly educated, enfranchised, and engaged civil populace.
Be that as it may, the Sal Mal Lanes of my country never disappeared. They went on through another quarter century of war, they mended fences, came apart, celebrated and mourned. There was a weight felt by everybody as they did these things, one that was lifted only in May 2009, when the war officially ended, when the walls and barricades and checkpoints were dismantled, and the soldiers went to work on reconstruction and other support work. Devi, therefore, was a symbol to me of a fragile beauty that underlay all life in Sri Lanka, as well as a stand-in for the country itself. How people dealt with her presence and absence was and is similar to how they dealt with what happened during those decades of war.
May 2015
|
#include <mbgl/annotation/annotation_manager.hpp>
#include <mbgl/annotation/annotation_source.hpp>
#include <mbgl/annotation/annotation_tile.hpp>
#include <mbgl/annotation/symbol_annotation_impl.hpp>
#include <mbgl/annotation/line_annotation_impl.hpp>
#include <mbgl/annotation/fill_annotation_impl.hpp>
#include <mbgl/annotation/style_sourced_annotation_impl.hpp>
#include <mbgl/style/style.hpp>
#include <mbgl/style/layers/symbol_layer.hpp>
#include <mbgl/style/layers/symbol_layer_impl.hpp>
#include <mbgl/storage/file_source.hpp>
#include <boost/function_output_iterator.hpp>
namespace mbgl {
using namespace style;
const std::string AnnotationManager::SourceID = "com.mapbox.annotations";
const std::string AnnotationManager::PointLayerID = "com.mapbox.annotations.points";
AnnotationManager::AnnotationManager(float pixelRatio)
: spriteAtlas({ 1024, 1024 }, pixelRatio) {
// This is a special atlas, holding only images added via addIcon, so we always treat it as
// loaded.
spriteAtlas.markAsLoaded();
}
AnnotationManager::~AnnotationManager() = default;
AnnotationID AnnotationManager::addAnnotation(const Annotation& annotation, const uint8_t maxZoom) {
AnnotationID id = nextID++;
Annotation::visit(annotation, [&] (const auto& annotation_) {
this->add(id, annotation_, maxZoom);
});
return id;
}
Update AnnotationManager::updateAnnotation(const AnnotationID& id, const Annotation& annotation, const uint8_t maxZoom) {
return Annotation::visit(annotation, [&] (const auto& annotation_) {
return this->update(id, annotation_, maxZoom);
});
}
void AnnotationManager::removeAnnotation(const AnnotationID& id) {
if (symbolAnnotations.find(id) != symbolAnnotations.end()) {
symbolTree.remove(symbolAnnotations.at(id));
symbolAnnotations.erase(id);
} else if (shapeAnnotations.find(id) != shapeAnnotations.end()) {
obsoleteShapeAnnotationLayers.insert(shapeAnnotations.at(id)->layerID);
shapeAnnotations.erase(id);
} else {
assert(false); // Should never happen
}
}
void AnnotationManager::add(const AnnotationID& id, const SymbolAnnotation& annotation, const uint8_t) {
auto impl = std::make_shared<SymbolAnnotationImpl>(id, annotation);
symbolTree.insert(impl);
symbolAnnotations.emplace(id, impl);
}
void AnnotationManager::add(const AnnotationID& id, const LineAnnotation& annotation, const uint8_t maxZoom) {
ShapeAnnotationImpl& impl = *shapeAnnotations.emplace(id,
std::make_unique<LineAnnotationImpl>(id, annotation, maxZoom)).first->second;
obsoleteShapeAnnotationLayers.erase(impl.layerID);
}
void AnnotationManager::add(const AnnotationID& id, const FillAnnotation& annotation, const uint8_t maxZoom) {
ShapeAnnotationImpl& impl = *shapeAnnotations.emplace(id,
std::make_unique<FillAnnotationImpl>(id, annotation, maxZoom)).first->second;
obsoleteShapeAnnotationLayers.erase(impl.layerID);
}
void AnnotationManager::add(const AnnotationID& id, const StyleSourcedAnnotation& annotation, const uint8_t maxZoom) {
ShapeAnnotationImpl& impl = *shapeAnnotations.emplace(id,
std::make_unique<StyleSourcedAnnotationImpl>(id, annotation, maxZoom)).first->second;
obsoleteShapeAnnotationLayers.erase(impl.layerID);
}
Update AnnotationManager::update(const AnnotationID& id, const SymbolAnnotation& annotation, const uint8_t maxZoom) {
Update result = Update::Nothing;
auto it = symbolAnnotations.find(id);
if (it == symbolAnnotations.end()) {
assert(false); // Attempt to update a non-existent symbol annotation
return result;
}
const SymbolAnnotation& existing = it->second->annotation;
if (existing.geometry != annotation.geometry) {
result |= Update::AnnotationData;
}
if (existing.icon != annotation.icon) {
result |= Update::AnnotationData | Update::AnnotationStyle;
}
if (result != Update::Nothing) {
removeAndAdd(id, annotation, maxZoom);
}
return result;
}
Update AnnotationManager::update(const AnnotationID& id, const LineAnnotation& annotation, const uint8_t maxZoom) {
auto it = shapeAnnotations.find(id);
if (it == shapeAnnotations.end()) {
assert(false); // Attempt to update a non-existent shape annotation
return Update::Nothing;
}
removeAndAdd(id, annotation, maxZoom);
return Update::AnnotationData | Update::AnnotationStyle;
}
Update AnnotationManager::update(const AnnotationID& id, const FillAnnotation& annotation, const uint8_t maxZoom) {
auto it = shapeAnnotations.find(id);
if (it == shapeAnnotations.end()) {
assert(false); // Attempt to update a non-existent shape annotation
return Update::Nothing;
}
removeAndAdd(id, annotation, maxZoom);
return Update::AnnotationData | Update::AnnotationStyle;
}
Update AnnotationManager::update(const AnnotationID& id, const StyleSourcedAnnotation& annotation, const uint8_t maxZoom) {
auto it = shapeAnnotations.find(id);
if (it == shapeAnnotations.end()) {
assert(false); // Attempt to update a non-existent shape annotation
return Update::Nothing;
}
removeAndAdd(id, annotation, maxZoom);
return Update::AnnotationData | Update::AnnotationStyle;
}
void AnnotationManager::removeAndAdd(const AnnotationID& id, const Annotation& annotation, const uint8_t maxZoom) {
removeAnnotation(id);
Annotation::visit(annotation, [&] (const auto& annotation_) {
this->add(id, annotation_, maxZoom);
});
}
std::unique_ptr<AnnotationTileData> AnnotationManager::getTileData(const CanonicalTileID& tileID) {
if (symbolAnnotations.empty() && shapeAnnotations.empty())
return nullptr;
auto tileData = std::make_unique<AnnotationTileData>();
AnnotationTileLayer& pointLayer = tileData->layers.emplace(PointLayerID, PointLayerID).first->second;
LatLngBounds tileBounds(tileID);
symbolTree.query(boost::geometry::index::intersects(tileBounds),
boost::make_function_output_iterator([&](const auto& val){
val->updateLayer(tileID, pointLayer);
}));
for (const auto& shape : shapeAnnotations) {
shape.second->updateTileData(tileID, *tileData);
}
return tileData;
}
void AnnotationManager::updateStyle(Style& style) {
// Create annotation source, point layer, and point bucket
if (!style.getSource(SourceID)) {
style.addSource(std::make_unique<AnnotationSource>());
std::unique_ptr<SymbolLayer> layer = std::make_unique<SymbolLayer>(PointLayerID, SourceID);
layer->setSourceLayer(PointLayerID);
layer->setIconImage({"{sprite}"});
layer->setIconAllowOverlap(true);
layer->setIconIgnorePlacement(true);
layer->impl->spriteAtlas = &spriteAtlas;
style.addLayer(std::move(layer));
}
for (const auto& shape : shapeAnnotations) {
shape.second->updateStyle(style);
}
for (const auto& layer : obsoleteShapeAnnotationLayers) {
if (style.getLayer(layer)) {
style.removeLayer(layer);
}
}
obsoleteShapeAnnotationLayers.clear();
}
void AnnotationManager::updateData() {
for (auto& tile : tiles) {
tile->setData(getTileData(tile->id.canonical));
}
}
void AnnotationManager::addTile(AnnotationTile& tile) {
tiles.insert(&tile);
tile.setData(getTileData(tile.id.canonical));
}
void AnnotationManager::removeTile(AnnotationTile& tile) {
tiles.erase(&tile);
}
void AnnotationManager::addImage(const std::string& id, std::unique_ptr<style::Image> image) {
spriteAtlas.addImage(id, std::move(image));
}
void AnnotationManager::removeImage(const std::string& id) {
spriteAtlas.removeImage(id);
}
double AnnotationManager::getTopOffsetPixelsForImage(const std::string& id) {
const style::Image* image = spriteAtlas.getImage(id);
return image ? -(image->image.size.height / image->pixelRatio) / 2 : 0;
}
} // namespace mbgl
|
1. Linguistic Background
The languages that are currently spoken in the Pacific
region can be divided broadly into three groups: the Australian and New Guinean
languages formed by people who participated in the region’s earliest migrations
over a period of 20,000-30,000 years starting several tens of thousands of years
ago, and the Austronesian languages spoken by Mongoloid people who migrated
from the Asian continent around 3,000 B.C. The region has numerous languages,
including 250 Aboriginal languages in Australia and 750 Papuan languages on
the island of New Guinea (including the Indonesian territory of Irian Jaya)
and neighboring areas. There are also 350 Austronesian languages in Melanesia,
20 in Polynesia, 12 in Micronesia and 100 in New Guinea (Comrie, Matthews, and
Polinsky 1996). There is wide variation not only among language groups, but
also among the families of languages. Few language families have been identified
among the languages of Australia and New Guinea using the methods of comparative
linguistics. Pacific languages are also characterized by the small size of speaker
populations and by the absence of dominant languages. However, there are usually
bilingual people who can speak or at least understand the languages of neighboring
populations, and it is believed that this situation has existed for a long time.
In terms of cultural factors, it appears that the diversification of languages
in the Pacific region was accelerated by the emblematic function of language
in the creation of a clear distinction between “ingroup” and “outgroup.”
The languages of New Guinea and the region around it show diverse linkages and wide variations between languages. The Austronesian languages of the Pacific region are mostly classified as Oceanian languages, while the Chamorro and Palau languages of Micronesia are classified as Western Malayo-Polynesian (WMP, Indonesian family), and the indigenous languages of Maluku and Irian Jaya in Eastern Indonesia into the Central Malayo-Polynesian (CMP) or the South Halmahera-West New Guinea (SHWNG) subgroups. In particular, there are strong similarities between the linguistic characteristics of the CMP and SHWNG languages and those of the Melanesian branch of the Oceanian languages. These linguistic conditions and characteristics are attributable to ethnic migrations within the region over a long period of time, accompanied by contacts and linguistic merging with indigenous Papuan people. Papuan languages are still found in parts of Indonesia, including Northern Halmahera, the islands of Pantar and Alor, and central and eastern Timor in the Province of Nusa Tenggara. In New Guinea, contact with Papuan languages has caused some Austronesian languages to exhibit a word order change from subject-verb-object to subject-object-verb (Austronesian Type 2) (Sakiyama 1994).
2. Linguistic Strata
With the start of colonization by the European powers
in the nineteenth century, a new set of linguistic circumstances developed in
the region. First, pidgin languages based on European and Melanesian languages
gradually emerged as common languages. The establishment of plantations in Samoa
and in Queensland, Australia, which had concentrations of people who spoke Melanesian
languages, was important in providing breeding grounds for pidgin languages.
A pidgin language is formed from elements of the grammar of both contributing
languages, though the pidgin languages tend to be looked down upon from the
perspective of the more dominant of the two parent languages. The region’s
newly formed common languages, including Tok Pisin, Bislama, and Solomon Pidgin,
flourished after they were taken back to the homelands of the various speakers.
This was possible because Vanuatu, the Solomon Islands and Papua New Guinea
were all multilingual societies without dominant languages. The number of speakers
of pidgin languages increased rapidly in this environment. At the same time,
the continuing existence of ethnic minority languages came under threat.
Examples of pidgins that were creolized (adopted as mother languages in their own right) include Solomon Pijin, which eventually had over 1,000 speakers aged five and over (1976) in the Solomon Islands. Bislama, a mixture of over 100 indigenous languages grafted upon a base of English and French, is now spoken by almost the entire population of Vanuatu (170,000 in 1996) and is partially creolized. Of particular interest is the fact that a group of more than 1,000 people who emigrated to New Caledonia have adopted Bislama as their primary language. The situation in Papua New Guinea, which has a population of 4,300,000 (1996), is even more dramatic. By 1982 the number of people using Tok Pisin as their primary language had reached 50,000, while another 2,000,000 used it as a second language (Grimes 1996).
3. Minority Languages and Common Languages in the Pacific Region
The Atlas of the World’s Languages in Danger of Disappearing published by UNESCO (Wurm 1996) provides merely a brief overview of the current situation in Papua New Guinea, Australia, the Solomon Islands, and Vanuatu. There is no mention of Micronesia, New Caledonia, or Polynesia, presumably because of a lack of information resulting from the large number of languages in these areas. The following report covers areas and languages that I have researched and endangered languages covered by field studies carried out by Japanese researchers.
3.1 Belau (Palau), Micronesia
According to Belau (Palau) government statistics (1990),
the total population of 15,122 people includes 61 people living on outlying
islands in Sonsorol State, and 33 in Hatohobei (Tochobei) State. Apart from
the Sonsorol Islands, Sonsorol State also includes the islands of Fanah, Meril
and Pulo An. In addition to the Hatohobei language, the language mix on these
outlying islands also includes nuclear Micronesian (Chuukic) languages, which
are the core Oceanian languages spoken in the Carolines. They differ from Palauan,
which is an Indonesian language. To lump these languages together as the Sonsorol
languages with a total of 600 speakers (Wurm and Hattori 1981-83) is as inaccurate
as combining the Miyako dialects of Okinawa into a single classification.
The number of Chuukic speakers has declined steadily since these figures were compiled. Starting in the German colonial period of the early twentieth century, people have been relocated from these outlying islands to Echang on Arakabesan Island in Belau. Today there are several hundred of these people. Many of those born in the new location only speak Palauan. A study by S. Oda (1975) estimated that there were 50 speakers of Pulo Annian. The language of Meril continued to decline and has now become extinct.
From the early part of the twentieth century until the end of World War II, Micronesia was under Japanese rule, administered by the South Seas Mandate. Japanese was used as a common language, and its influence is still evident today. The linguistic data on Micronesia presented by Grimes (1996) is distorted by the fact that, while the number of English speakers is shown, no mention is made of Japanese. A study carried out in 1970 (Wurm, Mühlhäusler, and Tryon 1996) found that people aged 35 and over could speak basic Japanese. This group is equivalent to people aged 63 and over in 1998. An estimate based on Belau government statistics (1990) suggests that more than 1,000 of these people are still alive. In the State of Yap in the Federated States of Micronesia, where the percentage of females attending school is said to have been low, we can assume that the number of Japanese speakers has fallen below 500.
It has been suggested that if Japan had continued to rule Micronesia, Japanese would certainly have become the sole language in the region, and indigenous languages would have disappeared (Wurm, Mühlhäusler, and Tryon 1996). This seems an overly harsh appraisal of Japan’s language policy. Except in the schools, as a matter of fact no significant steps were taken to promote the use of Japanese. Micronesia previously had no common language for communication between different islands. Even today, old people from different islands use Japanese as a common language (Sakiyama 1995; Toki 1998). However, the role of this Japanese pidgin appears to have ended within a single generation, and in this sense it too is an endangered language. Pidgin Japanese continues to be used as a lingua franca by Taiwanese in their fifties and older (Wurm, Mühlhäusler, and Tryon 1996), and the number of speakers is estimated to have been 10,000 in 1993 (Grimes 1996).
3.2 Yap, Micronesia
Ngulu Atoll is situated between the Yap Islands and the Belau Islands. The Nguluwan language is a mixture of Yapese and Ulithian, which belongs to the Chuukic family. It has inherited the Ulithian phonetic system and a partial version of Yap grammar (Sakiyama 1982). Nguluwan appears to have evolved through bilingualism between Yapese and Ulithian, and to describe it as a dialect of Ulithian (Grimes 1996) is inappropriate. In 1980 there were 28 speakers. Even with the inclusion of people who had migrated to Guror on Yap Island, where the parent village is located, the number of speakers was fewer than 50. Speakers are being assimilated rapidly into the Yapese language and culture.
3.3 Maluku, Indonesia
The book Atlas Bahasa Tanah Maluku (Taber et al. 1996) covers 117 ethnic languages (Austronesian, Papuan), including numbers of speakers for each language, areas of habitation and migration, access routes, simple cultural information, and basic numbers and expressions. This work is especially valuable since it corrects inaccuracies and errors in the 1977 Classification and Index of the World's Languages by C. Y. L. Voegelin and F. M. Voegelin. It also distinguishes languages and dialects according to their a priori mutual intelligibility. Fifteen languages are listed as having fewer than 1,000 speakers. They include the Nakaela language of Seram, which has only 5 speakers, the Amahai and Paulohi languages, also of Seram, which are spoken by 50 people each, and the South Nuaulu and Yalahatan languages, which have 1,000 speakers each on Seram Island. The data, however, are not complete. For example, the Bajau language is not included, presumably because of the difficulty of accessing the various solitary islands where the Bajau people live. The author researched the Yalahatan language in 1997 and in 1998, and the Bajau language (2,000 speakers) on Sangkuwang Island in 1997.
3.4 Irian Jaya, Papua New Guinea
Detailed information about the names, numbers of speakers,
and research data for over 800 languages spoken in New Guinea and its coastal
regions can be found in the works by the Barrs (1978), Voorhoeve (1975), and
Wurm (1982). However, apart from a few, not only the minority languages but even the majority
languages have yet to be adequately surveyed and researched.
There are many languages for which vocabulary collection has yet to be undertaken.
It appears that dictionaries or grammars have been published for less than one-tenth
of the region’s languages. However, the gospel has been published in several
dozen languages using orthographies established by SIL. Papuan languages range
from those with substantial speaker populations, including Enga, Chimbu (Kuman),
and Dani, which are spoken by well over 100,000 people, to endangered languages
such as Abaga with 5 speakers (150 according to Wurm), Makolkol with
7 (unknown according to Wurm), and Sene with under 10. There are very many languages
for which the number of speakers is unknown and more up-to-date information
is needed. Also, despite having substantially more than 1,000 speakers (Wurm
1982; Grimes 1996), Murik is in danger of extinction due to the creolization
of Tok Pisin (Foley 1986). Moreover, it is questionable whether the present
lists include all of the region’s languages.
Information about Irian Jaya is even sparser. A study on popular languages carried out by the author in 1984-85 revealed that Kuot (New Ireland), Taulil (New Britain), and Sko (Irian Jaya) all had several hundred speakers and that, in the case of Taulil in particular, an increasing number of young people were able to understand what their elders were saying but could no longer speak the language themselves. There has been a rapid shift to Kuanua, an indigenous language used in trade with neighboring Rabaul, which is replacing Taulil.
3.5 Solomon Islands, Melanesia
The total population of the Solomon Islands is 390,000 (1996). There are 63 Papuan, Melanesian, and Polynesian indigenous languages, of which only 37 are spoken by over 1,000 people (Grimes 1996). The Papuan Kazukuru languages (Guliguli, Doriri) of New Georgia, which were known to be endangered as early as 1931, have become extinct already, leaving behind just some scant linguistic information. The Melanesian Tanema and Vano languages of the Santa Cruz Islands and the Laghu language of the Santa Isabel Islands were extinct by 1990. This does not mean that the groups speaking them died out, but rather that the languages succumbed to the shift to Roviana, a trade language used in neighboring regions, or were replaced by Solomon Pijin (Sakiyama 1996).
3.6 Vanuatu, Melanesia
The situation in Vanuatu is very similar to that in the
Solomon Islands. The official view, written in Bislama, is as follows:
I gat sam ples long 110 lanwis evriwan so i gat bigfala lanwis difrens long Vanuatu. Pipol blong wan velej ol i toktok long olgeta bakegen evridei nomo long lanwis be i no Bislama, Inglis o Franis. (Vanuatu currently has 110 indigenous languages, which are all very different linguistically. On an everyday basis people in villages speak only their local languages, not Bislama, English, or French). (Vanuatu, 1980, Institute of Pacific Studies)
Among the Melanesian and Polynesian indigenous languages spoken by 170,000 people, or 93% of the total population (1996), there are many small minority tongues. These include Aore, which has only a single speaker (extinct according to Wurm and Hattori [1981-83]); Maragus and Ura (with 10 speakers each); Nasarian, and Sowa (with 20); and Dixon Reef, Lorediakarkar, Mafea, and Tambotalo (with 50). If languages with around 100 speakers are included, this category accounts for about one-half of the total number of languages (Grimes 1996). The spread of Bislama has had the effect of putting these languages in jeopardy.
3.7 New Caledonia, Melanesia
New Caledonia has a total population of 145,000 people,
of whom 62,000 are indigenous. As of 1981, there were 28 languages, all Melanesian
except for the one Polynesian language Uvean. The only languages with over 2,000
speakers are Cemuhi, Paicî, Ajië, and Xârâcùù, along with Dehu and Nengone,
which are spoken on the Loyalty Islands.
Dumbea (Paita), which is spoken by several hundred people, has been described by T. Shintani and Y. Paita (1983). And M. Osumi (1995) has described Tinrin, which has an estimated 400 speakers. Speakers of Tinrin are bilingual in Xârâcùù or Ajië. Nerë has 20 speakers and Arhö 10, while Waamwang, which had 3 speakers in 1946, is now reported to be extinct (Grimes 1996). Descendants of Javanese, who began to migrate to New Caledonia in the early part of the twentieth century, now number several thousand. The Javanese language spoken by these people, which has developed in isolation from the Javanese homeland, has attracted attention as a new pidgin language.
3.8 Australia
When Europeans first arrived in Australia in 1788, it is estimated that there were 700 different tribes in a population of 500,000-1,000,000 (Comrie, Matthews, and Polinsky 1996). By the 1830s Tasmanian had become extinct, and today the number of Aboriginal languages has fallen to less than one-half of what it once was. However, T. Tsunoda left detailed records of the Warrungu language, the last speaker of which died in 1981, and the Djaru language, which has only 200 speakers (Tsunoda 1974, 1981). Yawuru, which belongs to the Nyulnyulan family, reportedly has fewer than 20 speakers, all aged in their sixties or older. The language is described by K. Hosokawa (1992).
The Pacific has been heavily crisscrossed by human migration
from ancient to modern times. All Pacific countries except the Kingdom of Tonga
were colonized. This historical background is reflected in the existence of
multilevel diglossia in all regions of the Pacific.
Depending on the generation, the top level of language in Micronesia is either English (the official language) or pidgin Japanese (used as a lingua franca among islands). The next level is made up of the languages of major islands that exist as political units, such as Palauan, Yapese and Ponapean. On the lowest level are the various ethnic languages spoken mainly on solitary islands.
In the Maluku Islands of Indonesia, local Malay languages such as Ambonese Malay, North Maluku Malay and Bacanese Malay, form a layer beneath the official language, Indonesian. Under them are the dominant local languages, such as Hitu, which is spoken by 15,000 people on Ambon Island, and Ternate and Tidore, which are spoken in the Halmahera region. These are important as urban languages. On the lowest level are the various vernaculars.
In Papua New Guinea, standard English forms the top level, followed by Papua New Guinean English. Tok Pisin and Hiri Motu are used as common languages among the various ethnic groups. Beneath these layers are the regional or occupational common languages. For example, Hiri Motu is used as the law enforcement lingua franca in coastal areas around the Gulf of Papua, Yabem as a missionary language along the coast of the Huon Gulf, and Malay as a trade language in areas along the border with Indonesia. On the next level are the ethnic and tribal languages used on a day-to-day basis.
An example of a similar pattern in Polynesia can be found in Hawaii, where English and Hawaiian English rank above Da Kine Talk or Pidgin To Da Max, which are mixtures of English and Oceanic languages and are used as common languages among the various Asian migrants who have settled in Hawaii. Beneath these are ethnic languages, including Hawaiian and the various immigrant languages, such as a common Japanese based on the Hiroshima dialect, as well as Cantonese, Korean, and Tagalog.
All of the threatened languages are in danger because of their status as indigenous minority languages positioned at the lowest level of the linguistic hierarchy. Reports to date have included little discussion of the multilevel classification of linguistic strata from a formal linguistic perspective. It will be necessary in the future to examine these phenomena from the perspectives of sociolinguistics or linguistic anthropology.
Barr, Donald F., and Sharon G. Barr. 1978. Index of Irian Jaya Languages. Prepublication draft. Abepura, Indonesia: Cenderawasih University and Summer Institute of Linguistics.
Comrie, Bernard, Stephen Matthews, and Maria Polinsky. 1996. The Atlas of Languages. New York: Checkmark Books.
Foley, William A. 1986. The Papuan Languages of New Guinea. Cambridge, New York: Cambridge University Press.
Grimes, Barbara F., ed. 1996. Ethnologue: Languages of the World. Dallas: International Academic Bookstore.
Hosokawa, Komei. 1992. The Yawuru language of West Kimberley: A meaning-based description. Ph.D. diss., Australian National University.
Oda, Sachiko. 1977. The Syntax of Pulo Annian. Ph.D. diss., University of Hawaii.
Osumi, Midori. 1995. Tinrin grammar. Oceanic Linguistics Special Publication, No. 25. Honolulu: University of Hawaii Press.
Sakiyama, Osamu. 1982. The characteristics of Nguluwan from the viewpoint of language contact. In Islanders and Their Outside World. Aoyagi, Machiko, ed. Tokyo: Rikkyo University.
---. 1994. Hirimotu go no ruikei: jijun to gochishi (Affix order and postpositions in Hiri Motu: A cross-linguistic survey). Bulletin of the National Museum of Ethnology, vol. 19 no. 1: 1-17.
---. 1995. Mikuroneshia Berau no pijin ka nihongo (Pidginized Japanese in Belau, Micronesia). Shiso no kagaku, vol. 95 no. 3: 44-52.
---. 1996. Fukugouteki na gengo jokyo (Multilingual situation of the Solomon Islands). In Soromon shoto no seikatsu shi: bunka, rekishi, shakai (Life History in the Solomons: Culture, history and society). Akimichi, Tomoya, et al., eds. Tokyo: Akashi shoten.
Shintani, Takahiko and Yvonne Païta. 1990. Grammaire de la Langue de Païta. Nouméa, New Caledonia: Société d'études historiques de la Nouvelle-Calédonie.
Taber, Mark, et al. 1996. Atlas bahasa tanah Maluku (Maluku Languages Atlas). Ambon, Indonesia: Summer Institute of Linguistics and Pusat Pengkajian dan Pengembangan Maluku, Pattimura University.
Toki, Satoshi, ed. 1998. The remnants of Japanese in Micronesia. Memoirs of the Faculty of Letters, Osaka University, Vol. 38.
Tsunoda, Tasaku. 1974. A grammar of the Warrungu language, North Queensland. Master's thesis, Monash University.
---. 1981. The Djaru Language of Kimberley, Western Australia. Pacific Linguistics, ser. B, No. 78. Canberra: Australian National University.
Voorhoeve, C. L. 1975. Languages of Irian Jaya: Checklist, Preliminary classification, language maps, wordlists. Canberra: Australian National University.
Wurm, Stephen A. 1982. Papuan Languages of Oceania. Tübingen: Gunter Narr Verlag.
---. and Shiro Hattori, eds. 1981-83. Language Atlas of the Pacific Area. Pacific Linguistics, ser. C, No. 66-67. Canberra: Australian National University.
---, Peter Mühlhäusler, and Darrel T. Tryon. 1996. Atlas of languages of intercultural communication in the Pacific, Asia, and the Americas. 3 vols. Trends in Linguistics. Documentation 13. New York: Mouton de Gruyter.
*Translation of the author’s essay “Taiheiyo chiiki no kiki gengo”, Gekkan Gengo, Taishukan Publishing Co., 28(2), 102-11, 1999, with the permission of the publisher.
Any comments and suggestions to [email protected]
|
#include "stdafx.h"
#include "Canvas.h"

#include <iostream>

CCanvas::CCanvas()
{
}

// The drawing methods below are tracing stubs: each call is logged to
// stdout instead of rendering, which makes call sequences easy to verify.
void CCanvas::DrawLine(SPoint a, SPoint b)
{
	std::cout << "Line: " << a << " - " << b << std::endl;
}

void CCanvas::DrawEllipse(int l, int t, int w, int h)
{
	std::cout << "Draw Ellipse" << std::endl;
}

void CCanvas::FillEllipse(int l, int t, int w, int h, SColor color)
{
	std::cout << "Fill Ellipse" << std::endl;
}

void CCanvas::FillPolygon(const std::vector<SPoint>& points, SColor color)
{
	std::cout << "Fill Polygon" << std::endl;
}

// Pen and fill state setters are intentionally no-ops in this stub.
void CCanvas::SetPenColor(SColor color)
{
}

void CCanvas::SetPenThickness(unsigned thickness)
{
}

void CCanvas::SetFillColor(SColor color)
{
}
|
the energy [r]evolution
The climate change imperative demands nothing short of an Energy [R]evolution. The expert consensus is that this fundamental shift must begin immediately and be well underway within the next ten years in order to avert the worst impacts. What is needed is a complete transformation of the way we produce, consume and distribute energy, while at the same time maintaining economic growth. Nothing short of such a revolution will enable us to limit global warming to less than a rise in temperature of 2° Celsius, above which the impacts become devastating.
Current electricity generation relies mainly on burning fossil fuels, with their associated CO2 emissions, in very large power stations which waste much of their primary input energy. More energy is lost as the power is moved around the electricity grid network and converted from high transmission voltage down to a supply suitable for domestic or commercial consumers. The system is innately vulnerable to disruption: localised technical, weather-related or even deliberately caused faults can quickly cascade, resulting in widespread blackouts. Whichever technology is used to generate electricity within this old-fashioned configuration, it will inevitably be subject to some, or all, of these problems. At the core of the Energy [R]evolution there therefore needs to be a change in the way that energy is both produced and distributed.
4.1 key principles
the energy [r]evolution can be achieved by adhering to five key principles:
1. respect natural limits – phase out fossil fuels by the end of this century
We must learn to respect natural limits. There is only so much carbon that the atmosphere can absorb. Each year humans emit over 25 billion tonnes of carbon equivalent; we are literally filling up the sky. Geological resources of coal could provide several hundred years of fuel, but we cannot burn them and keep within safe limits. Oil and coal development must be ended. The global Energy [R]evolution scenario has a target to reduce energy-related CO2 emissions to a maximum of 10 Gigatonnes (Gt) by 2050 and phase out fossil fuels by 2085.
2. equity and fairness
As long as there are natural limits there needs to be a fair distribution of benefits and costs within societies, between nations and between present and future generations. At one extreme, a third of the world’s population has no access to electricity, whilst the most industrialised countries consume much more than their fair share.
The effects of climate change on the poorest communities are exacerbated by massive global energy inequality. If we are to address climate change, one of the core principles must be equity and fairness, so that the benefits of energy services – such as light, heat, power and transport – are available for all: north and south, rich and poor. Only in this way can we create true energy security, as well as the conditions for genuine human wellbeing.
The Advanced Energy [R]evolution scenario has a target to achieve energy equity as soon as technically possible. By 2050 the average per capita emission should be between 1 and 2 tonnes of CO2.
3.implement clean, renewable solutions and decentralise energy systems. There is no energy shortage. All we need to do is use existing technologies to harness energy effectively and efficiently. Renewable energy and energy efficiency measures are ready, viable and increasingly competitive. Wind, solar and other renewable energy technologies have experienced double digit market growth for the past decade.
Just as climate change is real, so is the renewable energy sector. Sustainable decentralised energy systems produce less carbon emissions, are cheaper and involve less dependence on imported fuel. They create more jobs and empower local communities. Decentralised systems are more secure and more efficient. This is what the Energy [R]evolution must aim to create.
To stop the earth’s climate spinning out of control, most of the world’s fossil fuel reserves – coal, oil and gas – must remain in the ground. Our goal is for humans to live within the natural limits of our small planet.
4.decouple growth from fossil fuel use Starting in the developed countries, economic growth must be fully decoupled from fossil fuel usage. It is a fallacy to suggest that economic growth must be predicated on their increased combustion.
We need to use the energy we produce much more efficiently, and we need to make the transition to renewable energy and away from fossil fuels quickly in order to enable clean and sustainable growth.
5.phase out dirty, unsustainable energy We need to phase out coal and nuclear power. We cannot continue to build coal plants at a time when emissions pose a real and present danger to both ecosystems and people. And we cannot continue to fuel the myriad nuclear threats by pretending nuclear power can in any way help to combat climate change. There is no role for nuclear power in the Energy [R]evolution.
|
/*
Phobos 3d
May 2011
Copyright (c) 2005-2011 Bruno Sanches http://code.google.com/p/phobos3d
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it freely,
subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
*/
#include "Phobos/Game/Things/Thing.h"
#include <Phobos/Exception.h>
#include "Phobos/Game/WorldManager.h"
#include "Phobos/Game/Things/SignalManager.h"
namespace Phobos
{
namespace Game
{
namespace Things
{
Thing::Thing(const String_t &name, UInt32_t flags):
Node(name, flags),
fFixedUpdateEnabled(false),
fUpdateEnabled(false)
{
}
Thing::Thing(const Char_t *name, UInt32_t flags):
Node(name, flags),
fFixedUpdateEnabled(false),
fUpdateEnabled(false)
{
}
Thing::~Thing()
{
SignalManager::GetInstance().CancelEvents(*this);
if(fFixedUpdateEnabled || fUpdateEnabled)
{
WorldManager &world = WorldManager::GetInstance();
if(fFixedUpdateEnabled)
world.RemoveFromFixedUpdateList(*this);
if(fUpdateEnabled)
world.RemoveFromUpdateList(*this);
}
}
void Thing::FixedUpdate()
{
this->OnFixedUpdate();
}
void Thing::Update()
{
this->OnUpdate();
}
void Thing::EnableFixedUpdate()
{
if(fFixedUpdateEnabled)
return;
WorldManager::GetInstance().AddToFixedUpdateList(*this);
fFixedUpdateEnabled = true;
}
void Thing::EnableUpdate()
{
if(fUpdateEnabled)
return;
WorldManager::GetInstance().AddToUpdateList(*this);
fUpdateEnabled = true;
}
void Thing::DisableFixedUpdate()
{
if(!fFixedUpdateEnabled)
return;
WorldManager::GetInstance().RemoveFromFixedUpdateList(*this);
fFixedUpdateEnabled = false;
}
void Thing::DisableUpdate()
{
if(!fUpdateEnabled)
return;
WorldManager::GetInstance().RemoveFromUpdateList(*this);
fUpdateEnabled = false;
}
#if 0
EntityOutputManager::EntityOutputManager()
{
//empty
}
void EntityOutputManager::AddConnector(const String_t &name, OutputProcConnector_t proc)
{
ConnectorsMap_t::iterator it = mapConnectors.lower_bound(name);
if((it != mapConnectors.end()) && (!mapConnectors.key_comp()(name, it->first)))
{
std::stringstream stream;
stream << "Output " << name << " already exists.";
PH_RAISE(OBJECT_ALREADY_EXISTS_EXCEPTION, "[EntityOutputManager::AddConnector]", stream.str());
}
mapConnectors.insert(it, std::make_pair(name, proc));
}
void EntityOutputManager::Connect(EntityIO &outputOwner, const std::string &outputName, EntityIO &inputOwner, InputProc_t input)
{
ConnectorsMap_t::iterator it = mapConnectors.find(outputName);
if(it == mapConnectors.end())
{
std::stringstream stream;
stream << "Output " << outputName << " not found.";
PH_RAISE(OBJECT_NOT_FOUND_EXCEPTION, "[EntityOutputManager::Connect]", stream.str());
}
(outputOwner.*(it->second))(inputOwner, input);
}
EntityInputManager::EntityInputManager()
{
//empty
}
void EntityInputManager::AddSlot(const String_t &name, InputProc_t proc)
{
InputMap_t::iterator it = mapInputs.lower_bound(name);
if((it != mapInputs.end()) && (!mapInputs.key_comp()(name, it->first)))
{
std::stringstream stream;
stream << "Input " << name << " already exists.";
PH_RAISE(OBJECT_ALREADY_EXISTS_EXCEPTION, "[EntityInputManager::AddSlot]", stream.str());
}
mapInputs.insert(it, std::make_pair(name, proc));
}
#endif
}
}
}
|
Shouting Fire Part 3
In the documentary Shouting Fire, the last clip raises questions about the right to protest. The First Amendment of the Constitution clearly gives “the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” The protest that took place in New York during the Republican Convention in 2004 was not harming anyone. I think the NY police force took it too far because they worsened the situation by arresting the protesters, who were protesting peacefully. 1,801 protesters were arrested for expressing their freedom of speech. The documentary makes an important point that in the midst of war people’s rights tend to shrink.
After watching the entire documentary, what surprises me is how people allow the government to get away with this. We should understand by now that in order to have complete freedom of speech we have to allow people to express themselves any way they want, as long as it doesn’t break any laws. This could mean that we would have to tolerate people like Chase Harper expressing their opinion, even if their opinion is wrong. Taking away rights from people like Ward Churchill, Martin Garbus, Debbie Almontaser, and Daniel Ellsberg was very wrong, because they were fired for expressing what they believe in. When the government takes such a drastic measure, it puts fear in people that they can’t choose to believe in something that goes against what the government believes in.
I agree with Garbus when he says at the end of the documentary that if you don’t fight for your freedom every day, you’re going to lose it. It is the citizens’ job to stand up against the unfair treatment the people in the documentary faced. The documentary really changed the way I look at our freedom of speech, and made me realize that we might not actually be able to say anything we want. Before we make accusations we should know the entire situation first. Everyone treated Churchill and the others as if they had committed a huge crime, when they were simply expressing their beliefs.
Shouting Fire Part 2
In the story about Martin Garbus, Garbus thought that the American Civil Liberties Union (ACLU) should take the case of the Nazi march in Chicago. This story shows the downside of freedom of speech, because Garbus knew that if the ACLU didn’t take the case, then our freedom of speech would be denied to a certain group of people. Like Garbus said, “If you are going to defend free speech, you also have to defend the freedom of speech of people you hate.” Even if your personal beliefs are not the same as the other party’s, they still have the right to their own beliefs. I think it was wrong of people to show hatred towards Garbus, because I don’t think he personally favored what the Nazis were doing; he did it for the sake of our freedom. He had family members who died in the Holocaust, which shows that he knew how the Jews felt about the situation.
The story about Chase Harper relates to the other story in that it also deals with freedom of speech. It raises the issue of whether there should be a limit on it. I think that freedom of speech should not be allowed when it jeopardizes someone’s life. When Harper wrote that message on his shirt, it made others who are gay or lesbian feel like they were doing something wrong. There is also the point of how our freedom of speech is limited in schools. If you were caught using foul language in school you would get in trouble, but don’t we have the right to say what we want? School is a place where kids learn to behave a certain way, so Harper’s t-shirt shouldn’t be worn in a place where kids are learning.
Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers
The documentary Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers was a very insightful film. In my AP U.S. history class we barely have the time to cover anything after World War II, and the only way we learned about history after WWII was by reading All the President's Men, which only covered the Watergate scandal. I knew that the Vietnam War was wrong, but I never learned the details behind why it was wrong, because some people justified it by stating that the war helped suppress communism. This film reveals what really happened in Vietnam.
This film sheds light on how the public has the right to know what is going on. Out of all the men working in Washington who were involved in the Vietnam War, only one man had the guts to come out and tell the truth. Daniel Ellsberg was that man; he put his career and life on the line when he decided to release the Pentagon Papers. One of the things that shocked me was how four presidents knew how badly the war was going, yet they continued lying to the public about the U.S. winning the war. They were also responsible for ruining South Vietnam by preventing free elections and supporting corrupt regimes.
The film was narrated by Ellsberg and included actual footage from the events, along with animation that showed what was happening in each scene. The film was easy to understand, and it helped me realize that it is the job of the public to bring down corruption in our government, and that we can’t just hope for someone like Daniel Ellsberg to always come out and reveal the truth to us.
One of the other things I liked was the side story of how Daniel Ellsberg and his wife met. That side of the story cools us down when the main story gets too serious. This documentary feels like a regular movie because of the variety of plot elements it includes, like romance, action, and lots of drama.
I think that Daniel Ellsberg should be considered a hero, because without his help many more lives would have been lost. He was the right person to reveal the truth because he actually went to Vietnam to see how America was doing in the war. One of my favorite parts of the film was when he realized he had to reveal the papers: his friend Randy Kehler was going to jail and said he was happy to join his friends, and that people would continue to resist the war. That part made me realize how serious the situation was, because so many people were willing to go to prison for many years as long as the war was ended.
This film also made me thankful for the First Amendment’s guarantee of freedom of the press. So many newspapers were willing to put their companies on the line to publish the Pentagon Papers, even when the government had forbidden it. This demonstrates the importance of truth and why we should allow everyone to know what is happening to their country. I think that it was wrong of people to call Ellsberg a “whistle blower,” because it is clear that he did the right thing even when no one else had the guts to do it.
Rating: 4 out of 5
Debbie Almontaser Controversy
The controversy involving Debbie Almontaser was about a word printed on the t-shirts of the group Arab Women Active in the Arts and Media (AWAAM), where she worked. She was forced to resign because, in an interview with the New York Post, she talked about the t-shirts, which the board of education had told her not to do. I think this entire situation shows how, since 9/11, Arabs and Muslims have been looked upon differently. Debbie Almontaser had a history of working in a lot of organizations where she helped people of different religions come together. Everyone seemed to have turned on her even though it was clear how innocent she was. I was surprised that Mayor Bloomberg asked her to resign even though he knew her and had given her awards for many things. Her son was a national guardsman, yet reporters asked her if she didn’t believe 9/11 happened.
This situation is similar to the Ward Churchill controversy, because both relate to how, after 9/11, people had to be more mindful of what they were saying. Neither of them deserved what happened to them. The price they had to pay was resigning from their jobs. I think both were put in a situation where they were helpless, because 9/11 was fresh in people’s minds and people wanted to project their hate somewhere. In the end Debbie Almontaser filed a lawsuit over being forced to resign, and she ended up winning up to $300,000, which I think is fair.
Ward Churchill Controversy
I believe that it was wrong of the University of Colorado to fire Ward Churchill. The University charged Churchill with offenses that don’t relate to what he wrote about the 9/11 attack, which proves the violation of his freedom of speech. Churchill wrote in his essay that it was America’s fault that the 9/11 attack happened. When right-wing Republicans found out about it, they didn’t want Churchill teaching things like that to students. I think that his view of the 9/11 attack shouldn’t be the reason he was fired from teaching, since he is a professor of ethnic studies. He mentioned in the documentary America’s bloody history, like the Contra War, No Gun Ri, Indochina, and the Wounded Knee Massacre, so I understand why, given America’s bad history with foreign countries, he blames America for the 9/11 attack.
I agree with Churchill that this controversy is like McCarthyism, because during that era a lot of people were accused of being communists when they spoke out against the government. Like in the McCarthy era, the American Council of Trustees and Alumni (ACTA) made a list called “How Many Ward Churchills?” targeting professors who taught material similar to what Churchill taught, over 60,000 of them. This shows how people are blowing this situation out of proportion. Near the end of the video, when Churchill was being dismissed for academic misconduct, there were students booing; this shows that the students also believed it was wrong to fire Churchill.
Even though I think that Churchill shouldn’t have been fired, I do think he shouldn’t have said in his lectures that the people who worked in the World Trade Center deserved what happened to them. That is probably why everyone made this such a big deal, especially since it was only a few years after the 9/11 attack, when more Americans thought that the First Amendment was not restricted enough. We as citizens of the United States have the right to express our beliefs, even if those beliefs don’t always please everyone.
|
#include "shap_values.h"
#include "util.h"
#include "shap_exact.h"
#include <catboost/private/libs/algo/features_data_helpers.h>
#include <catboost/private/libs/algo/index_calcer.h>
#include <catboost/libs/data/features_layout.h>
#include <catboost/libs/helpers/exception.h>
#include <catboost/libs/loggers/logger.h>
#include <catboost/libs/logging/profile_info.h>
#include <catboost/private/libs/options/restrictions.h>
#include <util/generic/algorithm.h>
#include <util/generic/cast.h>
#include <util/generic/utility.h>
#include <util/generic/ymath.h>
#include <catboost/libs/model/cpu/quantization.h>
using namespace NCB;
namespace {
struct TFeaturePathElement {
int Feature;
double ZeroPathsFraction;
double OnePathsFraction;
double Weight;
TFeaturePathElement() = default;
TFeaturePathElement(int feature, double zeroPathsFraction, double onePathsFraction, double weight)
: Feature(feature)
, ZeroPathsFraction(zeroPathsFraction)
, OnePathsFraction(onePathsFraction)
, Weight(weight)
{
}
};
} // anonymous namespace
static TVector<TFeaturePathElement> ExtendFeaturePath(
const TVector<TFeaturePathElement>& oldFeaturePath,
double zeroPathsFraction,
double onePathsFraction,
int feature
) {
const size_t pathLength = oldFeaturePath.size();
TVector<TFeaturePathElement> newFeaturePath(pathLength + 1);
Copy(oldFeaturePath.begin(), oldFeaturePath.begin() + pathLength, newFeaturePath.begin());
const double weight = pathLength == 0 ? 1.0 : 0.0;
newFeaturePath[pathLength] = TFeaturePathElement(feature, zeroPathsFraction, onePathsFraction, weight);
for (int elementIdx = pathLength - 1; elementIdx >= 0; --elementIdx) {
newFeaturePath[elementIdx + 1].Weight += onePathsFraction * newFeaturePath[elementIdx].Weight * (elementIdx + 1) / (pathLength + 1);
newFeaturePath[elementIdx].Weight = zeroPathsFraction * newFeaturePath[elementIdx].Weight * (pathLength - elementIdx) / (pathLength + 1);
}
return newFeaturePath;
}
static TVector<TFeaturePathElement> UnwindFeaturePath(
const TVector<TFeaturePathElement>& oldFeaturePath,
size_t eraseElementIdx)
{
const size_t pathLength = oldFeaturePath.size();
CB_ENSURE(pathLength > 0, "Path to unwind must have at least one element");
TVector<TFeaturePathElement> newFeaturePath(
oldFeaturePath.begin(),
oldFeaturePath.begin() + pathLength - 1);
for (size_t elementIdx = eraseElementIdx; elementIdx < pathLength - 1; ++elementIdx) {
newFeaturePath[elementIdx].Feature = oldFeaturePath[elementIdx + 1].Feature;
newFeaturePath[elementIdx].ZeroPathsFraction = oldFeaturePath[elementIdx + 1].ZeroPathsFraction;
newFeaturePath[elementIdx].OnePathsFraction = oldFeaturePath[elementIdx + 1].OnePathsFraction;
}
const double onePathsFraction = oldFeaturePath[eraseElementIdx].OnePathsFraction;
const double zeroPathsFraction = oldFeaturePath[eraseElementIdx].ZeroPathsFraction;
double weightDiff = oldFeaturePath[pathLength - 1].Weight;
if (!FuzzyEquals(1 + onePathsFraction, 1 + 0.0)) {
for (int elementIdx = pathLength - 2; elementIdx >= 0; --elementIdx) {
double oldWeight = newFeaturePath[elementIdx].Weight;
newFeaturePath[elementIdx].Weight = weightDiff * pathLength
/ (onePathsFraction * (elementIdx + 1));
weightDiff = oldWeight
- newFeaturePath[elementIdx].Weight * zeroPathsFraction * (pathLength - elementIdx - 1)
/ pathLength;
}
} else {
for (int elementIdx = pathLength - 2; elementIdx >= 0; --elementIdx) {
newFeaturePath[elementIdx].Weight *= pathLength
/ (zeroPathsFraction * (pathLength - elementIdx - 1));
}
}
return newFeaturePath;
}
static void UpdateShapByFeaturePath(
const TVector<TFeaturePathElement>& featurePath,
const double* leafValuesPtr,
size_t leafId,
int approxDimension,
bool isOblivious,
double averageTreeApprox,
double conditionFeatureFraction,
TVector<TShapValue>* shapValuesInternal
) {
const int approxDimOffset = isOblivious ? approxDimension : 1;
for (size_t elementIdx = 1; elementIdx < featurePath.size(); ++elementIdx) {
const TVector<TFeaturePathElement> unwoundPath = UnwindFeaturePath(featurePath, elementIdx);
double weightSum = 0.0;
for (const TFeaturePathElement& unwoundPathElement : unwoundPath) {
weightSum += unwoundPathElement.Weight;
}
const TFeaturePathElement& element = featurePath[elementIdx];
const auto sameFeatureShapValue = FindIf(
shapValuesInternal->begin(),
shapValuesInternal->end(),
[element](const TShapValue& shapValue) {
return shapValue.Feature == element.Feature;
}
);
const double coefficient =
conditionFeatureFraction * weightSum * (element.OnePathsFraction - element.ZeroPathsFraction);
if (sameFeatureShapValue == shapValuesInternal->end()) {
shapValuesInternal->emplace_back(element.Feature, approxDimension);
for (int dimension = 0; dimension < approxDimension; ++dimension) {
double value = coefficient * (leafValuesPtr[leafId * approxDimOffset + dimension] - averageTreeApprox);
shapValuesInternal->back().Value[dimension] = value;
}
} else {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
double addValue = coefficient * (leafValuesPtr[leafId * approxDimOffset + dimension] - averageTreeApprox);
sameFeatureShapValue->Value[dimension] += addValue;
}
}
}
}
TConditionsFeatureFraction::TConditionsFeatureFraction(
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int combinationClass,
double conditionFeatureFraction,
double hotCoefficient,
double coldCoefficient
) {
HotConditionFeatureFraction = conditionFeatureFraction;
ColdConditionFeatureFraction = conditionFeatureFraction;
if (fixedFeatureParams.Defined() && combinationClass == fixedFeatureParams->Feature) {
switch (fixedFeatureParams->FixedFeatureMode) {
case TFixedFeatureParams::EMode::FixedOn: {
ColdConditionFeatureFraction = 0;
break;
}
case TFixedFeatureParams::EMode::FixedOff: {
HotConditionFeatureFraction *= hotCoefficient;
ColdConditionFeatureFraction *= coldCoefficient;
break;
}
default: {
Y_UNREACHABLE();
}
}
}
}
static void ExtendFeaturePathIfFeatureNotFixed(
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
const TVector<TFeaturePathElement>& oldFeaturePath,
double zeroPathsFraction,
double onePathsFraction,
int feature,
TVector<TFeaturePathElement>* featurePath
) {
if (!fixedFeatureParams.Defined() ||
(fixedFeatureParams->FixedFeatureMode == TFixedFeatureParams::EMode::NotFixed || fixedFeatureParams->Feature != feature)) {
*featurePath = ExtendFeaturePath(
oldFeaturePath,
zeroPathsFraction,
onePathsFraction,
feature);
} else {
const size_t pathLength = oldFeaturePath.size();
*featurePath = TVector<TFeaturePathElement>(oldFeaturePath.begin(), oldFeaturePath.begin() + pathLength);
}
}
static void CalcObliviousInternalShapValuesForLeafRecursive(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
size_t documentLeafIdx,
size_t treeIdx,
int depth,
const TVector<TVector<double>>& subtreeWeights,
size_t nodeIdx,
const TVector<TFeaturePathElement>& oldFeaturePath,
double zeroPathsFraction,
double onePathsFraction,
int feature,
bool calcInternalValues,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
const double conditionFeatureFraction,
TVector<TShapValue>* shapValuesInternal,
double averageTreeApprox
) {
if (FuzzyEquals(1.0 + conditionFeatureFraction, 1.0 + 0.0)) {
return;
}
TVector<TFeaturePathElement> featurePath;
ExtendFeaturePathIfFeatureNotFixed(
fixedFeatureParams,
oldFeaturePath,
zeroPathsFraction,
onePathsFraction,
feature,
&featurePath
);
if (depth == forest.GetTreeSizes()[treeIdx]) {
UpdateShapByFeaturePath(
featurePath,
forest.GetFirstLeafPtrForTree(treeIdx),
nodeIdx,
forest.GetDimensionsCount(),
/*isOblivious*/ true,
averageTreeApprox,
conditionFeatureFraction,
shapValuesInternal
);
} else {
double newZeroPathsFraction = 1.0;
double newOnePathsFraction = 1.0;
const size_t remainingDepth = forest.GetTreeSizes()[treeIdx] - depth - 1;
const int combinationClass = binFeatureCombinationClass[
forest.GetTreeSplits()[forest.GetTreeStartOffsets()[treeIdx] + remainingDepth]
];
const auto sameFeatureElement = FindIf(
featurePath.begin(),
featurePath.end(),
[combinationClass](const TFeaturePathElement& element) {
return element.Feature == combinationClass;
}
);
if (sameFeatureElement != featurePath.end()) {
const size_t sameFeatureIndex = sameFeatureElement - featurePath.begin();
newZeroPathsFraction = featurePath[sameFeatureIndex].ZeroPathsFraction;
newOnePathsFraction = featurePath[sameFeatureIndex].OnePathsFraction;
featurePath = UnwindFeaturePath(featurePath, sameFeatureIndex);
}
const bool isGoRight = (documentLeafIdx >> remainingDepth) & 1;
const size_t goNodeIdx = nodeIdx * 2 + isGoRight;
const size_t skipNodeIdx = nodeIdx * 2 + !isGoRight;
const double hotCoefficient = subtreeWeights[depth + 1][goNodeIdx] / subtreeWeights[depth][nodeIdx];
const double coldCoefficient = subtreeWeights[depth + 1][skipNodeIdx] / subtreeWeights[depth][nodeIdx];
TConditionsFeatureFraction conditionsFeatureFraction{
fixedFeatureParams,
combinationClass,
conditionFeatureFraction,
hotCoefficient,
coldCoefficient
};
if (!FuzzyEquals(1 + subtreeWeights[depth + 1][goNodeIdx], 1 + 0.0)) {
double newZeroPathsFractionGoNode = newZeroPathsFraction * hotCoefficient;
CalcObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
depth + 1,
subtreeWeights,
goNodeIdx,
featurePath,
newZeroPathsFractionGoNode,
newOnePathsFraction,
combinationClass,
calcInternalValues,
fixedFeatureParams,
conditionsFeatureFraction.HotConditionFeatureFraction,
shapValuesInternal,
averageTreeApprox
);
}
if (!FuzzyEquals(1 + subtreeWeights[depth + 1][skipNodeIdx], 1 + 0.0)) {
double newZeroPathsFractionSkipNode = newZeroPathsFraction * coldCoefficient;
CalcObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
depth + 1,
subtreeWeights,
skipNodeIdx,
featurePath,
newZeroPathsFractionSkipNode,
/*onePathFraction*/ 0,
combinationClass,
calcInternalValues,
fixedFeatureParams,
conditionsFeatureFraction.ColdConditionFeatureFraction,
shapValuesInternal,
averageTreeApprox
);
}
}
}
static void CalcNonObliviousInternalShapValuesForLeafRecursive(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<bool>& mapNodeIdToIsGoRight,
size_t treeIdx,
int depth,
const TVector<TVector<double>>& subtreeWeights,
size_t nodeIdx,
const TVector<TFeaturePathElement>& oldFeaturePath,
double zeroPathsFraction,
double onePathsFraction,
int feature,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
const double conditionFeatureFraction,
bool calcInternalValues,
TVector<TShapValue>* shapValuesInternal,
double averageTreeApprox
) {
if (FuzzyEquals(1.0 + conditionFeatureFraction, 1.0 + 0.0)) {
return;
}
TVector<TFeaturePathElement> featurePath;
ExtendFeaturePathIfFeatureNotFixed(
fixedFeatureParams,
oldFeaturePath,
zeroPathsFraction,
onePathsFraction,
feature,
&featurePath
);
const auto& node = forest.GetNonSymmetricStepNodes()[nodeIdx];
const size_t startOffset = forest.GetTreeStartOffsets()[treeIdx];
size_t goNodeIdx;
size_t skipNodeIdx;
if (mapNodeIdToIsGoRight[nodeIdx - startOffset]) {
goNodeIdx = nodeIdx + node.RightSubtreeDiff;
skipNodeIdx = nodeIdx + node.LeftSubtreeDiff;
} else {
goNodeIdx = nodeIdx + node.LeftSubtreeDiff;
skipNodeIdx = nodeIdx + node.RightSubtreeDiff;
}
// goNodeIdx == nodeIdx means that nodeIdx is a terminal node for the
// observed object. That's why we should update the shap values here.
// Similarly for skipNodeIdx.
if (goNodeIdx == nodeIdx || skipNodeIdx == nodeIdx) {
UpdateShapByFeaturePath(
featurePath,
&forest.GetLeafValues()[0],
forest.GetNonSymmetricNodeIdToLeafId()[nodeIdx],
forest.GetDimensionsCount(),
/*isOblivious*/ false,
averageTreeApprox,
conditionFeatureFraction,
shapValuesInternal
);
}
double newZeroPathsFraction = 1.0;
double newOnePathsFraction = 1.0;
const int combinationClass = binFeatureCombinationClass[
forest.GetTreeSplits()[nodeIdx]
];
const auto sameFeatureElement = FindIf(
featurePath.begin(),
featurePath.end(),
[combinationClass](const TFeaturePathElement& element) {
return element.Feature == combinationClass;
}
);
if (sameFeatureElement != featurePath.end()) {
const size_t sameFeatureIndex = sameFeatureElement - featurePath.begin();
newZeroPathsFraction = featurePath[sameFeatureIndex].ZeroPathsFraction;
newOnePathsFraction = featurePath[sameFeatureIndex].OnePathsFraction;
featurePath = UnwindFeaturePath(featurePath, sameFeatureIndex);
}
const double hotCoefficient = goNodeIdx != nodeIdx ?
subtreeWeights[0][goNodeIdx - startOffset] / subtreeWeights[0][nodeIdx - startOffset] : -1.0;
const double coldCoefficient = skipNodeIdx != nodeIdx ?
subtreeWeights[0][skipNodeIdx - startOffset] / subtreeWeights[0][nodeIdx - startOffset] : -1.0;
TConditionsFeatureFraction conditionsFeatureFraction {
fixedFeatureParams,
combinationClass,
conditionFeatureFraction,
hotCoefficient,
coldCoefficient
};
if (goNodeIdx != nodeIdx && !FuzzyEquals(1 + subtreeWeights[0][goNodeIdx - startOffset], 1 + 0.0)) {
double newZeroPathsFractionGoNode = newZeroPathsFraction * hotCoefficient;
CalcNonObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
depth + 1,
subtreeWeights,
goNodeIdx,
featurePath,
newZeroPathsFractionGoNode,
newOnePathsFraction,
combinationClass,
fixedFeatureParams,
conditionsFeatureFraction.HotConditionFeatureFraction,
calcInternalValues,
shapValuesInternal,
averageTreeApprox
);
}
if (skipNodeIdx != nodeIdx && !FuzzyEquals(1 + subtreeWeights[0][skipNodeIdx - startOffset], 1 + 0.0)) {
double newZeroPathsFractionSkipNode = newZeroPathsFraction * coldCoefficient;
CalcNonObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
depth + 1,
subtreeWeights,
skipNodeIdx,
featurePath,
newZeroPathsFractionSkipNode,
/*onePathFraction*/ 0,
combinationClass,
fixedFeatureParams,
conditionsFeatureFraction.ColdConditionFeatureFraction,
calcInternalValues,
shapValuesInternal,
averageTreeApprox
);
}
}
static void UnpackInternalShaps(const TVector<TShapValue>& shapValuesInternal, const TVector<TVector<int>>& combinationClassFeatures, TVector<TShapValue>* shapValues) {
shapValues->clear();
if (shapValuesInternal.empty()) {
return;
}
const int approxDimension = shapValuesInternal[0].Value.ysize();
for (const auto & shapValueInternal: shapValuesInternal) {
const TVector<int> &flatFeatures = combinationClassFeatures[shapValueInternal.Feature];
for (int flatFeatureIdx : flatFeatures) {
const auto sameFeatureShapValue = FindIf(
shapValues->begin(),
shapValues->end(),
[flatFeatureIdx](const TShapValue &shapValue) {
return shapValue.Feature == flatFeatureIdx;
}
);
double coefficient = flatFeatures.size();
if (sameFeatureShapValue == shapValues->end()) {
shapValues->emplace_back(flatFeatureIdx, approxDimension);
for (int dimension = 0; dimension < approxDimension; ++dimension) {
double value = shapValueInternal.Value[dimension] / coefficient;
shapValues->back().Value[dimension] = value;
}
} else {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
double addValue = shapValueInternal.Value[dimension] / coefficient;
sameFeatureShapValue->Value[dimension] += addValue;
}
}
}
}
}
static inline void CalcObliviousShapValuesForLeaf(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<TVector<int>>& combinationClassFeatures,
size_t documentLeafIdx,
size_t treeIdx,
const TVector<TVector<double>>& subtreeWeights,
bool calcInternalValues,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
TVector<TShapValue>* shapValues,
double averageTreeApprox
) {
shapValues->clear();
if (calcInternalValues) {
CalcObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
/*depth*/ 0,
subtreeWeights,
/*nodeIdx*/ 0,
/*initialFeaturePath*/ {},
/*zeroPathFraction*/ 1,
/*onePathFraction*/ 1,
/*feature*/ -1,
calcInternalValues,
fixedFeatureParams,
/*conditionFeatureFraction*/ 1,
shapValues,
averageTreeApprox
);
} else {
TVector<TShapValue> shapValuesInternal;
CalcObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
/*depth*/ 0,
subtreeWeights,
/*nodeIdx*/ 0,
/*initialFeaturePath*/ {},
/*zeroPathFraction*/ 1,
/*onePathFraction*/ 1,
/*feature*/ -1,
calcInternalValues,
fixedFeatureParams,
/*conditionFeatureFraction*/ 1,
&shapValuesInternal,
averageTreeApprox
);
UnpackInternalShaps(shapValuesInternal, combinationClassFeatures, shapValues);
}
}
// Approximate SHAP for oblivious trees: walk the document's path from root to leaf
// and credit each split's feature class with the change in subtree mean value.
static void CalcObliviousApproximateShapValuesForLeafImplementation(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
size_t documentLeafIdx,
size_t treeIdx,
const TVector<TVector<TVector<double>>>& subtreeValues,
TVector<TShapValue>* shapValues
) {
const size_t approxDimension = forest.GetDimensionsCount();
size_t treeSize = forest.GetTreeSizes()[treeIdx];
size_t nodeIdx = 0;
for (size_t depth = 0; depth < treeSize; ++depth) {
size_t remainingDepth = treeSize - depth - 1;
const bool isGoRight = (documentLeafIdx >> remainingDepth) & 1;
const size_t goNodeIdx = nodeIdx * 2 + isGoRight;
const int combinationClass = binFeatureCombinationClass[
forest.GetTreeSplits()[forest.GetTreeStartOffsets()[treeIdx] + remainingDepth]
];
const auto featureShapValue = FindIf(
shapValues->begin(),
shapValues->end(),
[combinationClass](const TShapValue& shapValue) {
return shapValue.Feature == combinationClass;
}
);
auto newFeatureShapValue = shapValues->end();
if (featureShapValue == shapValues->end()) {
shapValues->emplace_back(combinationClass, approxDimension);
newFeatureShapValue = shapValues->end() - 1;
} else {
newFeatureShapValue = featureShapValue;
}
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
newFeatureShapValue->Value[dimension] +=
subtreeValues[depth + 1][goNodeIdx][dimension]
- subtreeValues[depth][nodeIdx][dimension];
}
nodeIdx = goNodeIdx;
}
}
static inline void CalcObliviousApproximateShapValuesForLeaf(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<TVector<int>>& combinationClassFeatures,
size_t documentLeafIdx,
size_t treeIdx,
const TVector<TVector<TVector<double>>>& subtreeValues,
bool calcInternalValues,
TVector<TShapValue>* shapValues
) {
shapValues->clear();
if (calcInternalValues) {
CalcObliviousApproximateShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
subtreeValues,
shapValues
);
} else {
TVector<TShapValue> shapValuesInternal;
CalcObliviousApproximateShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
subtreeValues,
&shapValuesInternal
);
UnpackInternalShaps(shapValuesInternal, combinationClassFeatures, shapValues);
}
}
static inline void CalcObliviousExactShapValuesForLeaf(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<TVector<int>>& combinationClassFeatures,
size_t documentLeafIdx,
size_t treeIdx,
const TVector<TVector<double>>& subtreeWeights,
bool calcInternalValues,
TVector<TShapValue>* shapValues
) {
shapValues->clear();
if (calcInternalValues) {
CalcObliviousExactShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
subtreeWeights,
shapValues
);
} else {
TVector<TShapValue> shapValuesInternal;
CalcObliviousExactShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
documentLeafIdx,
treeIdx,
subtreeWeights,
&shapValuesInternal
);
UnpackInternalShaps(shapValuesInternal, combinationClassFeatures, shapValues);
}
}
static inline void CalcNonObliviousShapValuesForLeaf(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<TVector<int>>& combinationClassFeatures,
const TVector<bool>& mapNodeIdToIsGoRight,
size_t treeIdx,
const TVector<TVector<double>>& subtreeWeights,
bool calcInternalValues,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
TVector<TShapValue>* shapValues,
double averageTreeApprox
) {
shapValues->clear();
if (calcInternalValues) {
CalcNonObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
/*depth*/ 0,
subtreeWeights,
/*nodeIdx*/ forest.GetTreeStartOffsets()[treeIdx],
/*initialFeaturePath*/ {},
/*zeroPathFraction*/ 1,
/*onePathFraction*/ 1,
/*feature*/ -1,
fixedFeatureParams,
/*conditionFeatureFraction*/ 1.0,
calcInternalValues,
shapValues,
averageTreeApprox
);
} else {
TVector<TShapValue> shapValuesInternal;
CalcNonObliviousInternalShapValuesForLeafRecursive(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
/*depth*/ 0,
subtreeWeights,
/*nodeIdx*/ forest.GetTreeStartOffsets()[treeIdx],
/*initialFeaturePath*/ {},
/*zeroPathFraction*/ 1,
/*onePathFraction*/ 1,
/*feature*/ -1,
fixedFeatureParams,
/*conditionFeatureFraction*/ 1.0,
calcInternalValues,
&shapValuesInternal,
averageTreeApprox
);
UnpackInternalShaps(shapValuesInternal, combinationClassFeatures, shapValues);
}
}
static void CalcNonObliviousApproximateShapValuesForLeafImplementation(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<bool>& mapNodeIdToIsGoRight,
size_t treeIdx,
const TVector<TVector<TVector<double>>>& subtreeValues,
TVector<TShapValue>* shapValues
) {
const size_t approxDimension = forest.GetDimensionsCount();
const size_t startOffset = forest.GetTreeStartOffsets()[treeIdx];
size_t nodeIdx = startOffset;
size_t goNodeIdx;
auto& node = forest.GetNonSymmetricStepNodes()[nodeIdx];
if (mapNodeIdToIsGoRight[nodeIdx - startOffset]) {
goNodeIdx = nodeIdx + node.RightSubtreeDiff;
} else {
goNodeIdx = nodeIdx + node.LeftSubtreeDiff;
}
while (nodeIdx != goNodeIdx) {
const int combinationClass = binFeatureCombinationClass[
forest.GetTreeSplits()[nodeIdx]
];
const auto featureShapValue = FindIf(
shapValues->begin(),
shapValues->end(),
[combinationClass](const TShapValue& shapValue) {
return shapValue.Feature == combinationClass;
}
);
auto newFeatureShapValue = shapValues->end();
if (featureShapValue == shapValues->end()) {
shapValues->emplace_back(combinationClass, approxDimension);
newFeatureShapValue = shapValues->end() - 1;
} else {
newFeatureShapValue = featureShapValue;
}
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
newFeatureShapValue->Value[dimension] +=
subtreeValues[0][goNodeIdx - startOffset][dimension]
- subtreeValues[0][nodeIdx - startOffset][dimension];
}
nodeIdx = goNodeIdx;
auto& node = forest.GetNonSymmetricStepNodes()[nodeIdx];
if (mapNodeIdToIsGoRight[nodeIdx - startOffset]) {
goNodeIdx = nodeIdx + node.RightSubtreeDiff;
} else {
goNodeIdx = nodeIdx + node.LeftSubtreeDiff;
}
}
}
static inline void CalcNonObliviousApproximateShapValuesForLeaf(
const TModelTrees& forest,
const TVector<int>& binFeatureCombinationClass,
const TVector<TVector<int>>& combinationClassFeatures,
const TVector<bool>& mapNodeIdToIsGoRight,
size_t treeIdx,
const TVector<TVector<TVector<double>>>& subtreeValues,
bool calcInternalValues,
TVector<TShapValue>* shapValues
) {
shapValues->clear();
if (calcInternalValues) {
CalcNonObliviousApproximateShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
subtreeValues,
shapValues
);
} else {
TVector<TShapValue> shapValuesInternal;
CalcNonObliviousApproximateShapValuesForLeafImplementation(
forest,
binFeatureCombinationClass,
mapNodeIdToIsGoRight,
treeIdx,
subtreeValues,
&shapValuesInternal
);
UnpackInternalShaps(shapValuesInternal, combinationClassFeatures, shapValues);
}
}
static TVector<double> CalcMeanValueForTree(
const TModelTrees& forest,
const TVector<TVector<double>>& subtreeWeights,
size_t treeIdx
) {
const int approxDimension = forest.GetDimensionsCount();
TVector<double> meanValue(approxDimension, 0.0);
if (forest.IsOblivious()) {
auto firstLeafPtr = forest.GetFirstLeafPtrForTree(treeIdx);
const size_t maxDepth = forest.GetTreeSizes()[treeIdx];
for (size_t leafIdx = 0; leafIdx < (size_t(1) << maxDepth); ++leafIdx) {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
meanValue[dimension] += firstLeafPtr[leafIdx * approxDimension + dimension]
* subtreeWeights[maxDepth][leafIdx];
}
}
} else {
const int totalNodesCount = forest.GetNonSymmetricNodeIdToLeafId().size();
const bool isLastTree = treeIdx == forest.GetTreeStartOffsets().size() - 1;
const size_t startOffset = forest.GetTreeStartOffsets()[treeIdx];
const size_t endOffset = isLastTree ? totalNodesCount : forest.GetTreeStartOffsets()[treeIdx + 1];
for (size_t nodeIdx = startOffset; nodeIdx < endOffset; ++nodeIdx) {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
size_t leafIdx = forest.GetNonSymmetricNodeIdToLeafId()[nodeIdx];
if (leafIdx < forest.GetLeafValues().size()) {
meanValue[dimension] += forest.GetLeafValues()[leafIdx + dimension]
* forest.GetLeafWeights()[leafIdx / forest.GetDimensionsCount()];
}
}
}
}
for (int dimension = 0; dimension < approxDimension; ++dimension) {
meanValue[dimension] /= subtreeWeights[0][0];
}
return meanValue;
}
// 'reversed' means the tree is inverted into a child -> parent mapping:
// reversedTree[childLocalIdx] holds the node index of that child's parent.
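// Hypothetical example (assuming the tree starts at offset 0): a root with
// LeftSubtreeDiff = 1 and RightSubtreeDiff = 2 yields
// reversedTree = {0, 0, 0}: both children point back to the root's node index 0.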
static TVector<size_t> GetReversedSubtreeForNonObliviousTree(
const TModelTrees& forest,
int treeIdx
) {
const int totalNodesCount = forest.GetTreeSplits().size();
const bool isLastTree = static_cast<size_t>(treeIdx + 1) == forest.GetTreeStartOffsets().size();
const int startOffset = forest.GetTreeStartOffsets()[treeIdx];
const int endOffset = isLastTree ? totalNodesCount : forest.GetTreeStartOffsets()[treeIdx + 1];
const int treeSize = endOffset - startOffset;
TVector<size_t> reversedTree(treeSize, 0);
for (int nodeIdx = startOffset; nodeIdx < endOffset; ++nodeIdx) {
const int localIdx = nodeIdx - startOffset;
const size_t leftDiff = forest.GetNonSymmetricStepNodes()[nodeIdx].LeftSubtreeDiff;
const size_t rightDiff = forest.GetNonSymmetricStepNodes()[nodeIdx].RightSubtreeDiff;
if (leftDiff != 0) {
reversedTree[localIdx + leftDiff] = nodeIdx;
}
if (rightDiff != 0) {
reversedTree[localIdx + rightDiff] = nodeIdx;
}
}
return reversedTree;
}
// All calculations below are for a single document (docIdx)
static TVector<bool> GetDocumentPathToLeafForNonObliviousBlock(
const TModelTrees& forest,
const size_t docIdx,
const size_t treeIdx,
const NCB::NModelEvaluation::TCPUEvaluatorQuantizedData& block
) {
const ui8* binFeatures = block.QuantizedData.data();
const size_t docCountInBlock = block.ObjectsCount;
const TRepackedBin* treeSplitsPtr = forest.GetRepackedBins().data();
const auto firstLeafOffsets = forest.GetFirstLeafOffsets();
const int totalNodesCount = forest.GetTreeSplits().size();
const bool isLastTree = static_cast<size_t>(treeIdx + 1) == forest.GetTreeStartOffsets().size();
const size_t endOffset = isLastTree ? totalNodesCount : forest.GetTreeStartOffsets()[treeIdx + 1];
TVector<bool> mapNodeIdToIsGoRight;
for (NCB::NModelEvaluation::TCalcerIndexType nodeIdx = forest.GetTreeStartOffsets()[treeIdx]; nodeIdx < endOffset; ++nodeIdx) {
const TRepackedBin split = treeSplitsPtr[nodeIdx];
ui8 featureValue = binFeatures[split.FeatureIndex * docCountInBlock + docIdx];
if (!forest.GetOneHotFeatures().empty()) {
featureValue ^= split.XorMask;
}
mapNodeIdToIsGoRight.push_back(featureValue >= split.SplitIdx);
}
return mapNodeIdToIsGoRight;
}
static TVector<bool> GetDocumentIsGoRightMapperForNodesInNonObliviousTree(
const TModelTrees& forest,
size_t treeIdx,
const NCB::NModelEvaluation::IQuantizedData* binarizedFeaturesForBlock,
size_t documentIdx
) {
const NModelEvaluation::TCPUEvaluatorQuantizedData* dataPtr = reinterpret_cast<const NModelEvaluation::TCPUEvaluatorQuantizedData*>(binarizedFeaturesForBlock);
Y_ASSERT(dataPtr);
auto blockId = documentIdx / NModelEvaluation::FORMULA_EVALUATION_BLOCK_SIZE;
auto subBlock = dataPtr->ExtractBlock(blockId);
return GetDocumentPathToLeafForNonObliviousBlock(
forest,
documentIdx % NModelEvaluation::FORMULA_EVALUATION_BLOCK_SIZE,
treeIdx,
subBlock
);
}
// Computes the weighted mean approx value of every subtree node (bottom-up);
// used by the Approximate calculation type.
static TVector<TVector<TVector<double>>> CalcSubtreeValuesForTree(
const TModelTrees& forest,
const TVector<TVector<double>>& subtreeWeights,
const TVector<double>& leafWeights,
size_t treeIdx
) {
const size_t approxDimension = forest.GetDimensionsCount();
TVector<TVector<TVector<double>>> subtreeValues;
if (forest.IsOblivious()) {
auto firstLeafPtr = forest.GetFirstLeafPtrForTree(treeIdx);
const size_t treeDepth = forest.GetTreeSizes()[treeIdx];
subtreeValues.resize(treeDepth + 1);
size_t leafNum = size_t(1) << treeDepth;
subtreeValues[treeDepth].resize(leafNum);
for (size_t leafIdx = 0; leafIdx < leafNum; ++leafIdx) {
subtreeValues[treeDepth][leafIdx].resize(approxDimension, 0.0);
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[treeDepth][leafIdx][dimension] = firstLeafPtr[leafIdx * approxDimension + dimension];
}
}
for (int depth = treeDepth - 1; depth >= 0; --depth) {
size_t subtreeNum = size_t(1) << depth;
subtreeValues[depth].resize(subtreeNum);
for (size_t subtreeIdx = 0; subtreeIdx < subtreeNum; ++subtreeIdx) {
subtreeValues[depth][subtreeIdx].resize(approxDimension, 0.0);
if (!FuzzyEquals(1 + subtreeWeights[depth][subtreeIdx], 1 + 0.0)) { // skip subtrees with (almost) zero weight
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[depth][subtreeIdx][dimension] =
subtreeValues[depth + 1][subtreeIdx * 2][dimension]
* subtreeWeights[depth + 1][subtreeIdx * 2] +
subtreeValues[depth + 1][subtreeIdx * 2 + 1][dimension]
* subtreeWeights[depth + 1][subtreeIdx * 2 + 1];
subtreeValues[depth][subtreeIdx][dimension] /= subtreeWeights[depth][subtreeIdx];
}
}
}
}
} else {
const size_t startOffset = forest.GetTreeStartOffsets()[treeIdx];
auto firstLeafPtr = &forest.GetLeafValues()[0];
TVector<size_t> reversedTree = GetReversedSubtreeForNonObliviousTree(forest, treeIdx);
subtreeValues.resize(1);
subtreeValues[0].resize(reversedTree.size(), TVector<double>(approxDimension, 0.0));
if (reversedTree.size() == 1) {
size_t leafIdx = forest.GetNonSymmetricNodeIdToLeafId()[startOffset];
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[0][0][dimension] = firstLeafPtr[leafIdx + dimension];
}
} else {
for (size_t localIdx = reversedTree.size() - 1; localIdx > 0; --localIdx) {
size_t leafIdx = forest.GetNonSymmetricNodeIdToLeafId()[startOffset + localIdx];
size_t leafWeightIdx = leafIdx / approxDimension;
if (leafWeightIdx < leafWeights.size()) {
if (!FuzzyEquals(1 + leafWeights[leafWeightIdx], 1 + 0.0)) {
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[0][localIdx][dimension] +=
firstLeafPtr[leafIdx + dimension]
* leafWeights[leafWeightIdx];
}
}
}
if (!FuzzyEquals(1 + subtreeWeights[0][localIdx], 1 + 0.0)) {
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[0][reversedTree[localIdx] - startOffset][dimension] +=
subtreeValues[0][localIdx][dimension];
}
}
}
for (int localIdx = reversedTree.size() - 1; localIdx >= 0; --localIdx) {
if (!FuzzyEquals(1 + subtreeWeights[0][localIdx], 1 + 0.0)) {
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
subtreeValues[0][localIdx][dimension] /= subtreeWeights[0][localIdx];
}
}
}
}
}
return subtreeValues;
}
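// Computes the total leaf weight of every subtree. Hypothetical example for a
// depth-2 oblivious tree with leaf weights {1, 2, 3, 4}:
// subtreeWeights[2] = {1, 2, 3, 4}, subtreeWeights[1] = {3, 7}, subtreeWeights[0] = {10}.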
static TVector<TVector<double>> CalcSubtreeWeightsForTree(
const TModelTrees& forest,
const TVector<double>& leafWeights,
int treeIdx
) {
TVector<TVector<double>> subtreeWeights;
if (forest.IsOblivious()) {
const int treeDepth = forest.GetTreeSizes()[treeIdx];
subtreeWeights.resize(treeDepth + 1);
subtreeWeights[treeDepth].resize(size_t(1) << treeDepth);
const int weightOffset = forest.GetFirstLeafOffsets()[treeIdx] / forest.GetDimensionsCount();
for (size_t nodeIdx = 0; nodeIdx < size_t(1) << treeDepth; ++nodeIdx) {
subtreeWeights[treeDepth][nodeIdx] = leafWeights[weightOffset + nodeIdx];
}
for (int depth = treeDepth - 1; depth >= 0; --depth) {
const size_t nodeCount = size_t(1) << depth;
subtreeWeights[depth].resize(nodeCount);
for (size_t nodeIdx = 0; nodeIdx < nodeCount; ++nodeIdx) {
subtreeWeights[depth][nodeIdx] = subtreeWeights[depth + 1][nodeIdx * 2] + subtreeWeights[depth + 1][nodeIdx * 2 + 1];
}
}
} else {
const int startOffset = forest.GetTreeStartOffsets()[treeIdx];
TVector<size_t> reversedTree = GetReversedSubtreeForNonObliviousTree(forest, treeIdx);
subtreeWeights.resize(1); // non-symmetric trees are stored flat, in a single level, unlike the per-depth layout for oblivious trees
subtreeWeights[0].resize(reversedTree.size(), 0);
if (reversedTree.size() == 1) {
subtreeWeights[0][0] = leafWeights[forest.GetNonSymmetricNodeIdToLeafId()[startOffset] / forest.GetDimensionsCount()];
} else {
for (size_t localIdx = reversedTree.size() - 1; localIdx > 0; --localIdx) {
size_t leafIdx = forest.GetNonSymmetricNodeIdToLeafId()[startOffset + localIdx] / forest.GetDimensionsCount();
if (leafIdx < leafWeights.size()) {
subtreeWeights[0][localIdx] += leafWeights[leafIdx];
}
subtreeWeights[0][reversedTree[localIdx] - startOffset] += subtreeWeights[0][localIdx];
}
}
}
return subtreeWeights;
}
// Maps every binary feature bucket to an equivalence class of binary features
// that are built on the same combination of flat features.
static void MapBinFeaturesToClasses(
const TModelTrees& forest,
TVector<int>* binFeatureCombinationClass,
TVector<TVector<int>>* combinationClassFeatures
) {
TConstArrayRef<TFloatFeature> floatFeatures = forest.GetFloatFeatures();
TConstArrayRef<TCatFeature> catFeatures = forest.GetCatFeatures();
const NCB::TFeaturesLayout layout(
TVector<TFloatFeature>(floatFeatures.begin(), floatFeatures.end()),
TVector<TCatFeature>(catFeatures.begin(), catFeatures.end()));
TVector<TVector<int>> featuresCombinations;
TVector<size_t> featureBucketSizes;
for (const TFloatFeature& floatFeature : forest.GetFloatFeatures()) {
if (!floatFeature.UsedInModel()) {
continue;
}
featuresCombinations.emplace_back();
featuresCombinations.back() = { floatFeature.Position.FlatIndex };
featureBucketSizes.push_back(floatFeature.Borders.size());
}
for (const TOneHotFeature& oneHotFeature: forest.GetOneHotFeatures()) {
featuresCombinations.emplace_back();
featuresCombinations.back() = {
(int)layout.GetExternalFeatureIdx(oneHotFeature.CatFeatureIndex,
EFeatureType::Categorical)
};
featureBucketSizes.push_back(oneHotFeature.Values.size());
}
for (const TCtrFeature& ctrFeature : forest.GetCtrFeatures()) {
const TFeatureCombination& combination = ctrFeature.Ctr.Base.Projection;
featuresCombinations.emplace_back();
for (int catFeatureIdx : combination.CatFeatures) {
featuresCombinations.back().push_back(
layout.GetExternalFeatureIdx(catFeatureIdx, EFeatureType::Categorical));
}
featureBucketSizes.push_back(ctrFeature.Borders.size());
}
TVector<size_t> featureFirstBinBucket(featureBucketSizes.size(), 0);
for (size_t i = 1; i < featureBucketSizes.size(); ++i) {
featureFirstBinBucket[i] = featureFirstBinBucket[i - 1] + featureBucketSizes[i - 1];
}
TVector<int> sortedBinFeatures(featuresCombinations.size());
Iota(sortedBinFeatures.begin(), sortedBinFeatures.end(), 0);
Sort(
sortedBinFeatures.begin(),
sortedBinFeatures.end(),
[featuresCombinations](int feature1, int feature2) {
return featuresCombinations[feature1] < featuresCombinations[feature2];
}
);
*binFeatureCombinationClass = TVector<int>(forest.GetBinaryFeaturesFullCount());
*combinationClassFeatures = TVector<TVector<int>>();
int equivalenceClassesCount = 0;
for (ui32 featureIdx = 0; featureIdx < featuresCombinations.size(); ++featureIdx) {
int currentFeature = sortedBinFeatures[featureIdx];
int previousFeature = featureIdx == 0 ? -1 : sortedBinFeatures[featureIdx - 1];
if (featureIdx == 0 || featuresCombinations[currentFeature] != featuresCombinations[previousFeature]) {
combinationClassFeatures->push_back(featuresCombinations[currentFeature]);
++equivalenceClassesCount;
}
for (size_t binBucketId = featureFirstBinBucket[currentFeature];
binBucketId < featureFirstBinBucket[currentFeature] + featureBucketSizes[currentFeature];
++binBucketId)
{
(*binFeatureCombinationClass)[binBucketId] = equivalenceClassesCount - 1;
}
}
}
// Calculates SHAP values for one document over all trees of the model.
// The result has approxDimension rows; the last column (featuresCount) accumulates the expected value.
void CalcShapValuesForDocumentMulti(
const TFullModel& model,
const TShapPreparedTrees& preparedTrees,
const NCB::NModelEvaluation::IQuantizedData* binarizedFeaturesForBlock,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int featuresCount,
TConstArrayRef<NModelEvaluation::TCalcerIndexType> docIndices,
size_t documentIdxInBlock,
TVector<TVector<double>>* shapValues,
ECalcTypeShapValues calcType
) {
const int approxDimension = model.GetDimensionsCount();
shapValues->assign(approxDimension, TVector<double>(featuresCount + 1, 0.0));
const size_t treeCount = model.GetTreeCount();
for (size_t treeIdx = 0; treeIdx < treeCount; ++treeIdx) {
if (preparedTrees.CalcShapValuesByLeafForAllTrees && model.IsOblivious()) {
Y_ASSERT(docIndices[treeIdx] < preparedTrees.ShapValuesByLeafForAllTrees[treeIdx].size());
for (const TShapValue& shapValue : preparedTrees.ShapValuesByLeafForAllTrees[treeIdx][docIndices[treeIdx]]) {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
(*shapValues)[dimension][shapValue.Feature] += shapValue.Value[dimension];
}
}
} else {
TVector<TShapValue> shapValuesByLeaf;
switch (calcType) {
case ECalcTypeShapValues::Approximate:
if (model.IsOblivious()) {
CalcObliviousApproximateShapValuesForLeaf(
*model.ModelTrees.Get(),
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeValuesForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
} else {
TVector<bool> mapNodeIdToIsGoRight = GetDocumentIsGoRightMapperForNodesInNonObliviousTree(
*model.ModelTrees.Get(),
treeIdx,
binarizedFeaturesForBlock,
documentIdxInBlock
);
CalcNonObliviousApproximateShapValuesForLeaf(
*model.ModelTrees.Get(),
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
mapNodeIdToIsGoRight,
treeIdx,
preparedTrees.SubtreeValuesForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
}
break;
case ECalcTypeShapValues::Regular:
if (model.IsOblivious()) {
CalcObliviousShapValuesForLeaf(
*model.ModelTrees.Get(),
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
fixedFeatureParams,
&shapValuesByLeaf,
preparedTrees.AverageApproxByTree[treeIdx]
);
} else {
TVector<bool> mapNodeIdToIsGoRight = GetDocumentIsGoRightMapperForNodesInNonObliviousTree(
*model.ModelTrees.Get(),
treeIdx,
binarizedFeaturesForBlock,
documentIdxInBlock
);
CalcNonObliviousShapValuesForLeaf(
*model.ModelTrees.Get(),
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
mapNodeIdToIsGoRight,
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
fixedFeatureParams,
&shapValuesByLeaf,
preparedTrees.AverageApproxByTree[treeIdx]
);
}
break;
case ECalcTypeShapValues::Exact:
CB_ENSURE(model.IsOblivious(), "'Exact' calculation type is supported only for oblivious trees.");
CalcObliviousExactShapValuesForLeaf(
*model.ModelTrees.Get(),
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
break;
}
for (const TShapValue& shapValue : shapValuesByLeaf) {
for (int dimension = 0; dimension < approxDimension; ++dimension) {
(*shapValues)[dimension][shapValue.Feature] += shapValue.Value[dimension];
}
}
}
for (int dimension = 0; dimension < approxDimension; ++dimension) {
(*shapValues)[dimension][featuresCount] +=
preparedTrees.MeanValuesForAllTrees[treeIdx][dimension];
}
}
if (approxDimension == 1) {
(*shapValues)[0][featuresCount] += model.GetScaleAndBias().Bias;
}
}
void CalcShapValuesForDocumentMulti(
const TFullModel& model,
const TShapPreparedTrees& preparedTrees,
const NCB::NModelEvaluation::IQuantizedData* binarizedFeaturesForBlock,
int featuresCount,
TConstArrayRef<NModelEvaluation::TCalcerIndexType> docIndices,
size_t documentIdxInBlock,
TVector<TVector<double>>* shapValues,
ECalcTypeShapValues calcType
) {
CalcShapValuesForDocumentMulti(
model,
preparedTrees,
binarizedFeaturesForBlock,
/*fixedFeatureParams*/ Nothing(),
featuresCount,
docIndices,
documentIdxInBlock,
shapValues,
calcType
);
}
static void CalcShapValuesForDocumentBlockMulti(
const TFullModel& model,
const IFeaturesBlockIterator& featuresBlockIterator,
int flatFeatureCount,
const TShapPreparedTrees& preparedTrees,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
size_t start,
size_t end,
NPar::TLocalExecutor* localExecutor,
TVector<TVector<TVector<double>>>* shapValuesForAllDocuments,
ECalcTypeShapValues calcType
) {
CheckNonZeroApproxForZeroWeightLeaf(model);
const size_t documentCount = end - start;
auto binarizedFeaturesForBlock = MakeQuantizedFeaturesForEvaluator(model, featuresBlockIterator, start, end);
TVector<NModelEvaluation::TCalcerIndexType> indices(binarizedFeaturesForBlock->GetObjectsCount() * model.GetTreeCount());
model.GetCurrentEvaluator()->CalcLeafIndexes(binarizedFeaturesForBlock.Get(), 0, model.GetTreeCount(), indices);
const int oldShapValuesSize = shapValuesForAllDocuments->size();
shapValuesForAllDocuments->resize(oldShapValuesSize + end - start);
NPar::TLocalExecutor::TExecRangeParams blockParams(0, documentCount);
localExecutor->ExecRange([&] (size_t documentIdxInBlock) {
TVector<TVector<double>>& shapValues = (*shapValuesForAllDocuments)[oldShapValuesSize + documentIdxInBlock];
CalcShapValuesForDocumentMulti(
model,
preparedTrees,
binarizedFeaturesForBlock.Get(),
fixedFeatureParams,
flatFeatureCount,
MakeArrayRef(indices.data() + documentIdxInBlock * model.GetTreeCount(), model.GetTreeCount()),
documentIdxInBlock,
&shapValues,
calcType
);
}, blockParams, NPar::TLocalExecutor::WAIT_COMPLETE);
}
static double CalcAverageApprox(const TVector<double>& averageApproxByClass) {
double result = 0;
for (double value : averageApproxByClass) {
result += value;
}
return result / averageApproxByClass.size();
}
static void CalcShapValuesByLeafForTreeBlock(
const TModelTrees& forest,
int start,
int end,
bool calcInternalValues,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
NPar::TLocalExecutor* localExecutor,
TShapPreparedTrees* preparedTrees,
ECalcTypeShapValues calcType
) {
const auto& binFeatureCombinationClass = preparedTrees->BinFeatureCombinationClass;
const auto& combinationClassFeatures = preparedTrees->CombinationClassFeatures;
NPar::TLocalExecutor::TExecRangeParams blockParams(start, end);
localExecutor->ExecRange([&] (size_t treeIdx) {
const bool isOblivious = forest.GetNonSymmetricStepNodes().empty() && forest.GetNonSymmetricNodeIdToLeafId().empty();
if (preparedTrees->CalcShapValuesByLeafForAllTrees && isOblivious) {
const size_t leafCount = (size_t(1) << forest.GetTreeSizes()[treeIdx]);
TVector<TVector<TShapValue>>& shapValuesByLeaf = preparedTrees->ShapValuesByLeafForAllTrees[treeIdx];
shapValuesByLeaf.resize(leafCount);
for (size_t leafIdx = 0; leafIdx < leafCount; ++leafIdx) {
switch (calcType) {
case ECalcTypeShapValues::Approximate:
CalcObliviousApproximateShapValuesForLeaf(
forest,
binFeatureCombinationClass,
combinationClassFeatures,
leafIdx,
treeIdx,
preparedTrees->SubtreeValuesForAllTrees[treeIdx],
calcInternalValues,
&shapValuesByLeaf[leafIdx]
);
break;
case ECalcTypeShapValues::Regular:
CalcObliviousShapValuesForLeaf(
forest,
binFeatureCombinationClass,
combinationClassFeatures,
leafIdx,
treeIdx,
preparedTrees->SubtreeWeightsForAllTrees[treeIdx],
calcInternalValues,
fixedFeatureParams,
&shapValuesByLeaf[leafIdx],
preparedTrees->AverageApproxByTree[treeIdx]
);
break;
case ECalcTypeShapValues::Exact:
CalcObliviousExactShapValuesForLeaf(
forest,
binFeatureCombinationClass,
combinationClassFeatures,
leafIdx,
treeIdx,
preparedTrees->SubtreeWeightsForAllTrees[treeIdx],
calcInternalValues,
&shapValuesByLeaf[leafIdx]
);
break;
}
}
}
}, blockParams, NPar::TLocalExecutor::WAIT_COMPLETE);
}
bool IsPrepareTreesCalcShapValues(
const TFullModel& model,
const TDataProvider* dataset,
EPreCalcShapValues mode
) {
switch (mode) {
case EPreCalcShapValues::UsePreCalc:
CB_ENSURE(model.IsOblivious(), "UsePreCalc mode can be used only for symmetric trees.");
return true;
case EPreCalcShapValues::NoPreCalc:
return false;
case EPreCalcShapValues::Auto:
if (dataset == nullptr) {
return true;
} else {
if (!model.IsOblivious()) {
return false;
}
const size_t treeCount = model.GetTreeCount();
const TModelTrees& forest = *model.ModelTrees;
double treesAverageLeafCount = static_cast<double>(forest.GetLeafValues().size()) / treeCount; // avoid integer-division truncation
return treesAverageLeafCount < dataset->ObjectsGrouping->GetObjectCount();
}
}
Y_UNREACHABLE();
}
static bool AreApproxesZeroForLastClass(
const TModelTrees& forest,
size_t treeIdx) {
const int approxDimension = forest.GetDimensionsCount();
const double Eps = 1e-12;
if (forest.IsOblivious()) {
auto firstLeafPtr = forest.GetFirstLeafPtrForTree(treeIdx);
const size_t maxDepth = forest.GetTreeSizes()[treeIdx];
for (size_t leafIdx = 0; leafIdx < (size_t(1) << maxDepth); ++leafIdx) {
if (fabs(firstLeafPtr[leafIdx * approxDimension + approxDimension - 1]) > Eps){
return false;
}
}
} else {
const int totalNodesCount = forest.GetNonSymmetricNodeIdToLeafId().size();
const bool isLastTree = treeIdx == forest.GetTreeStartOffsets().size() - 1;
const size_t startOffset = forest.GetTreeStartOffsets()[treeIdx];
const size_t endOffset = isLastTree ? totalNodesCount : forest.GetTreeStartOffsets()[treeIdx + 1];
for (size_t nodeIdx = startOffset; nodeIdx < endOffset; ++nodeIdx) {
size_t leafIdx = forest.GetNonSymmetricNodeIdToLeafId()[nodeIdx];
if (leafIdx < forest.GetLeafValues().size() && fabs(forest.GetLeafValues()[leafIdx + approxDimension - 1]) > Eps) { // last class approx, matching the oblivious branch
return false;
}
}
}
return true;
}
static bool IsMultiClass(const TFullModel& model) {
return model.ModelTrees->GetDimensionsCount() > 1;
}
TMaybe<ELossFunction> TryGuessModelMultiClassLoss(const TFullModel& model) {
TString lossFunctionName = model.GetLossFunctionName();
if (lossFunctionName) {
return FromString<ELossFunction>(lossFunctionName);
} else {
const auto& forest = *model.ModelTrees;
bool approxesAreZeroForLastClass = true;
for (size_t treeIdx = 0; treeIdx < model.GetTreeCount(); ++treeIdx) {
approxesAreZeroForLastClass &= AreApproxesZeroForLastClass(forest, treeIdx);
}
return approxesAreZeroForLastClass ? TMaybe<ELossFunction>(ELossFunction::MultiClass) : Nothing();
}
}
static void CalcTreeStats(
const TModelTrees& forest,
const TVector<double>& leafWeights,
bool isMultiClass,
TShapPreparedTrees* preparedTrees,
ECalcTypeShapValues calcType
) {
const size_t treeCount = forest.GetTreeCount();
for (size_t treeIdx = 0; treeIdx < treeCount; ++treeIdx) {
preparedTrees->SubtreeWeightsForAllTrees[treeIdx] = CalcSubtreeWeightsForTree(forest, leafWeights, treeIdx);
preparedTrees->MeanValuesForAllTrees[treeIdx]
= CalcMeanValueForTree(forest, preparedTrees->SubtreeWeightsForAllTrees[treeIdx], treeIdx);
if (calcType == ECalcTypeShapValues::Approximate) {
preparedTrees->SubtreeValuesForAllTrees[treeIdx] =
CalcSubtreeValuesForTree(forest, preparedTrees->SubtreeWeightsForAllTrees[treeIdx],
leafWeights, treeIdx);
}
preparedTrees->AverageApproxByTree[treeIdx] = isMultiClass ? CalcAverageApprox(preparedTrees->MeanValuesForAllTrees[treeIdx]) : 0;
}
}
static void InitPreparedTrees(
const TFullModel& model,
const TDataProvider* dataset, // can be nullptr if model has LeafWeights
EPreCalcShapValues mode,
bool calcInternalValues,
NPar::TLocalExecutor* localExecutor,
TShapPreparedTrees* preparedTrees,
ECalcTypeShapValues calcType
) {
const size_t treeCount = model.GetTreeCount();
// used only if model.ModelTrees->LeafWeights is empty
TVector<double> leafWeights;
if (model.ModelTrees->GetLeafWeights().empty()) {
CB_ENSURE(
dataset,
"PrepareTrees requires either non-empty LeafWeights in the model or a provided dataset"
);
CB_ENSURE(dataset->ObjectsGrouping->GetObjectCount() != 0, "To calculate shap values, dataset must contain objects.");
CB_ENSURE(dataset->MetaInfo.GetFeatureCount() > 0, "To calculate shap values, dataset must contain features.");
leafWeights = CollectLeavesStatistics(*dataset, model, localExecutor);
}
preparedTrees->CalcShapValuesByLeafForAllTrees = IsPrepareTreesCalcShapValues(model, dataset, mode);
if (!preparedTrees->CalcShapValuesByLeafForAllTrees) {
TVector<double> modelLeafWeights(model.ModelTrees->GetLeafWeights().begin(), model.ModelTrees->GetLeafWeights().end());
preparedTrees->LeafWeightsForAllTrees
= modelLeafWeights.empty() ? leafWeights : modelLeafWeights;
}
preparedTrees->ShapValuesByLeafForAllTrees.resize(treeCount);
preparedTrees->SubtreeWeightsForAllTrees.resize(treeCount);
preparedTrees->MeanValuesForAllTrees.resize(treeCount);
if (calcType == ECalcTypeShapValues::Approximate) {
preparedTrees->SubtreeValuesForAllTrees.resize(treeCount);
}
preparedTrees->AverageApproxByTree.resize(treeCount);
preparedTrees->CalcInternalValues = calcInternalValues;
const TModelTrees& forest = *model.ModelTrees;
MapBinFeaturesToClasses(
forest,
&preparedTrees->BinFeatureCombinationClass,
&preparedTrees->CombinationClassFeatures
);
}
static void InitLeafWeights(
const TFullModel& model,
const TDataProvider* dataset,
NPar::TLocalExecutor* localExecutor,
TVector<double>* leafWeights
) {
const auto& leafWeightsOfModels = model.ModelTrees->GetLeafWeights();
if (leafWeightsOfModels.empty()) {
CB_ENSURE(
dataset,
"To calculate shap values, either a model with leaf weights, or a dataset are required."
);
CB_ENSURE(dataset->ObjectsGrouping->GetObjectCount() != 0, "To calculate shap values, dataset must contain objects.");
CB_ENSURE(dataset->MetaInfo.GetFeatureCount() > 0, "To calculate shap values, dataset must contain features.");
*leafWeights = CollectLeavesStatistics(*dataset, model, localExecutor);
} else {
leafWeights->assign(leafWeightsOfModels.begin(), leafWeightsOfModels.end());
}
}
static inline bool IsMultiClassification(const TFullModel& model) {
ELossFunction modelLoss = ELossFunction::RMSE;
if (IsMultiClass(model)) {
TMaybe<ELossFunction> loss = TryGuessModelMultiClassLoss(model);
if (loss) {
modelLoss = *loss.Get();
} else {
CATBOOST_WARNING_LOG << "There is no loss_function parameter in the model, so it is treated as MultiClass" << Endl;
modelLoss = ELossFunction::MultiClass;
}
}
return (modelLoss == ELossFunction::MultiClass);
}
void CalcShapValuesByLeaf(
const TFullModel& model,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int logPeriod,
bool calcInternalValues,
NPar::TLocalExecutor* localExecutor,
TShapPreparedTrees* preparedTrees,
ECalcTypeShapValues calcType
) {
const size_t treeCount = model.GetTreeCount();
const size_t treeBlockSize = CB_THREAD_LIMIT; // smallest block size that makes threading worthwhile
TProfileInfo processTreesProfile(treeCount);
TImportanceLogger treesLogger(treeCount, "trees processed", "Processing trees...", logPeriod);
for (size_t start = 0; start < treeCount; start += treeBlockSize) {
size_t end = Min(start + treeBlockSize, treeCount);
processTreesProfile.StartIterationBlock();
CalcShapValuesByLeafForTreeBlock(
*model.ModelTrees,
start,
end,
calcInternalValues,
fixedFeatureParams,
localExecutor,
preparedTrees,
calcType
);
processTreesProfile.FinishIterationBlock(end - start);
auto profileResults = processTreesProfile.GetProfileResults();
treesLogger.Log(profileResults);
}
}
TShapPreparedTrees PrepareTrees(
const TFullModel& model,
const TDataProvider* dataset, // can be nullptr if model has LeafWeights
EPreCalcShapValues mode,
NPar::TLocalExecutor* localExecutor,
bool calcInternalValues,
ECalcTypeShapValues calcType
) {
TVector<double> leafWeights;
InitLeafWeights(model, dataset, localExecutor, &leafWeights);
TShapPreparedTrees preparedTrees;
InitPreparedTrees(model, dataset, mode, calcInternalValues, localExecutor, &preparedTrees, calcType);
const bool isMultiClass = IsMultiClassification(model);
CalcTreeStats(*model.ModelTrees, leafWeights, isMultiClass, &preparedTrees, calcType);
return preparedTrees;
}
TShapPreparedTrees PrepareTrees(
const TFullModel& model,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
CB_ENSURE(
!model.ModelTrees->GetLeafWeights().empty(),
"Model must have leaf weights, or a sample pool must be provided"
);
TShapPreparedTrees preparedTrees = PrepareTrees(
model,
nullptr,
EPreCalcShapValues::Auto, localExecutor,
/*calcInternalValues*/ false,
calcType
);
CalcShapValuesByLeaf(
model,
/*fixedFeatureParams*/ Nothing(),
/*logPeriod*/ 0,
preparedTrees.CalcInternalValues,
localExecutor,
&preparedTrees,
calcType
);
return preparedTrees;
}
void CalcShapValuesInternalForFeature(
const TShapPreparedTrees& preparedTrees,
const TFullModel& model,
int /*logPeriod*/,
ui32 start,
ui32 end,
ui32 featuresCount,
const NCB::TObjectsDataProvider& objectsData,
TVector<TVector<TVector<double>>>* shapValues, // [docIdx][featureIdx][dim]
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
CB_ENSURE(start <= end && end <= objectsData.GetObjectCount());
const TModelTrees& forest = *model.ModelTrees;
shapValues->clear();
const ui32 documentCount = end - start;
shapValues->resize(documentCount);
THolder<IFeaturesBlockIterator> featuresBlockIterator
= CreateFeaturesBlockIterator(model, objectsData, start, end);
const ui32 documentBlockSize = NModelEvaluation::FORMULA_EVALUATION_BLOCK_SIZE;
TVector<NModelEvaluation::TCalcerIndexType> indices(documentBlockSize * forest.GetTreeCount());
for (ui32 startIdx = 0; startIdx < documentCount; startIdx += documentBlockSize) {
NPar::TLocalExecutor::TExecRangeParams blockParams(startIdx, startIdx + Min(documentBlockSize, documentCount - startIdx));
featuresBlockIterator->NextBlock(blockParams.LastId - blockParams.FirstId);
auto binarizedFeaturesForBlock = MakeQuantizedFeaturesForEvaluator(model, *featuresBlockIterator, blockParams.FirstId, blockParams.LastId);
model.GetCurrentEvaluator()->CalcLeafIndexes(
binarizedFeaturesForBlock.Get(),
0, forest.GetTreeCount(),
MakeArrayRef(indices.data(), binarizedFeaturesForBlock->GetObjectsCount() * forest.GetTreeCount())
);
localExecutor->ExecRange([&](ui32 documentIdx) {
TVector<TVector<double>> &docShapValues = (*shapValues)[documentIdx];
docShapValues.assign(featuresCount, TVector<double>(forest.GetDimensionsCount() + 1, 0.0));
auto docIndices = MakeArrayRef(indices.data() + forest.GetTreeCount() * (documentIdx - startIdx), forest.GetTreeCount());
for (size_t treeIdx = 0; treeIdx < forest.GetTreeCount(); ++treeIdx) {
if (preparedTrees.CalcShapValuesByLeafForAllTrees && model.IsOblivious()) {
for (const TShapValue& shapValue : preparedTrees.ShapValuesByLeafForAllTrees[treeIdx][docIndices[treeIdx]]) {
for (int dimension = 0; dimension < (int)forest.GetDimensionsCount(); ++dimension) {
docShapValues[shapValue.Feature][dimension] += shapValue.Value[dimension];
}
}
} else {
TVector<TShapValue> shapValuesByLeaf;
switch (calcType) {
case ECalcTypeShapValues::Approximate:
if (model.IsOblivious()) {
CalcObliviousApproximateShapValuesForLeaf(
forest,
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeValuesForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
} else {
const TVector<bool> docPathIndexes = GetDocumentIsGoRightMapperForNodesInNonObliviousTree(
*model.ModelTrees.Get(),
treeIdx,
binarizedFeaturesForBlock.Get(),
documentIdx - startIdx
);
CalcNonObliviousApproximateShapValuesForLeaf(
forest,
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docPathIndexes,
treeIdx,
preparedTrees.SubtreeValuesForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
}
break;
case ECalcTypeShapValues::Regular:
if (model.IsOblivious()) {
CalcObliviousShapValuesForLeaf(
forest,
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
/*fixedFeatureParams*/ Nothing(),
&shapValuesByLeaf,
preparedTrees.AverageApproxByTree[treeIdx]
);
} else {
const TVector<bool> docPathIndexes = GetDocumentIsGoRightMapperForNodesInNonObliviousTree(
*model.ModelTrees.Get(),
treeIdx,
binarizedFeaturesForBlock.Get(),
documentIdx - startIdx
);
CalcNonObliviousShapValuesForLeaf(
forest,
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docPathIndexes,
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
/*fixedFeatureParams*/ Nothing(),
&shapValuesByLeaf,
preparedTrees.AverageApproxByTree[treeIdx]
);
}
break;
case ECalcTypeShapValues::Exact:
CB_ENSURE(model.IsOblivious(), "'Exact' calculation type is supported only for oblivious trees.");
CalcObliviousExactShapValuesForLeaf(
forest,
preparedTrees.BinFeatureCombinationClass,
preparedTrees.CombinationClassFeatures,
docIndices[treeIdx],
treeIdx,
preparedTrees.SubtreeWeightsForAllTrees[treeIdx],
preparedTrees.CalcInternalValues,
&shapValuesByLeaf
);
break;
}
for (const TShapValue& shapValue : shapValuesByLeaf) {
for (int dimension = 0; dimension < (int)forest.GetDimensionsCount(); ++dimension) {
docShapValues[shapValue.Feature][dimension] += shapValue.Value[dimension];
}
}
}
}
}, blockParams, NPar::TLocalExecutor::WAIT_COMPLETE);
}
}
static TVector<TVector<TVector<double>>> CalcShapValuesWithPreparedTrees(
const TFullModel& model,
const TDataProvider& dataset,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int logPeriod,
TShapPreparedTrees* preparedTrees,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
const size_t documentCount = dataset.ObjectsGrouping->GetObjectCount();
const size_t documentBlockSize = CB_THREAD_LIMIT; // smallest block size that makes threading worthwhile
const int flatFeatureCount = SafeIntegerCast<int>(dataset.MetaInfo.GetFeatureCount());
TImportanceLogger documentsLogger(documentCount, "documents processed", "Processing documents...", logPeriod);
TVector<TVector<TVector<double>>> shapValues;
shapValues.reserve(documentCount);
TProfileInfo processDocumentsProfile(documentCount);
THolder<IFeaturesBlockIterator> featuresBlockIterator
= CreateFeaturesBlockIterator(model, *dataset.ObjectsData, 0, documentCount);
for (size_t start = 0; start < documentCount; start += documentBlockSize) {
size_t end = Min(start + documentBlockSize, documentCount);
processDocumentsProfile.StartIterationBlock();
featuresBlockIterator->NextBlock(end - start);
CalcShapValuesForDocumentBlockMulti(
model,
*featuresBlockIterator,
flatFeatureCount,
*preparedTrees,
fixedFeatureParams,
start,
end,
localExecutor,
&shapValues,
calcType
);
processDocumentsProfile.FinishIterationBlock(end - start);
auto profileResults = processDocumentsProfile.GetProfileResults();
documentsLogger.Log(profileResults);
}
return shapValues;
}
TVector<TVector<TVector<double>>> CalcShapValuesMulti(
const TFullModel& model,
const TDataProvider& dataset,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int logPeriod,
EPreCalcShapValues mode,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
TShapPreparedTrees preparedTrees = PrepareTrees(
model,
&dataset,
mode,
localExecutor,
/*calcInternalValues*/ false,
calcType
);
CalcShapValuesByLeaf(
model,
fixedFeatureParams,
logPeriod,
preparedTrees.CalcInternalValues,
localExecutor,
&preparedTrees,
calcType
);
return CalcShapValuesWithPreparedTrees(
model,
dataset,
fixedFeatureParams,
logPeriod,
&preparedTrees,
localExecutor,
calcType
);
}
TVector<TVector<double>> CalcShapValues(
const TFullModel& model,
const TDataProvider& dataset,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
int logPeriod,
EPreCalcShapValues mode,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
CB_ENSURE(model.ModelTrees->GetDimensionsCount() == 1, "Model must not be trained for multiclassification.");
TVector<TVector<TVector<double>>> shapValuesMulti = CalcShapValuesMulti(
model,
dataset,
fixedFeatureParams,
logPeriod,
mode,
localExecutor,
calcType
);
size_t documentsCount = dataset.ObjectsGrouping->GetObjectCount();
TVector<TVector<double>> shapValues(documentsCount);
for (size_t documentIdx = 0; documentIdx < documentsCount; ++documentIdx) {
shapValues[documentIdx] = std::move(shapValuesMulti[documentIdx][0]);
}
return shapValues;
}
// Transposes shapValues[docIdx][dim][featureIdx] into [featureIdx][dim][docIdx].
static TVector<TVector<TVector<double>>> SwapFeatureAndDocumentAxes(const TVector<TVector<TVector<double>>>& shapValues) {
const size_t approxDimension = shapValues[0].size();
const size_t documentCount = shapValues.size();
const size_t featuresCount = shapValues[0][0].size();
TVector<TVector<TVector<double>>> swappedShapValues(featuresCount);
for (size_t featureIdx = 0; featureIdx < featuresCount; ++featureIdx) {
swappedShapValues[featureIdx].resize(approxDimension);
for (size_t dimension = 0; dimension < approxDimension; ++dimension) {
swappedShapValues[featureIdx][dimension].resize(documentCount);
for (size_t documentIdx = 0; documentIdx < documentCount; ++documentIdx) {
swappedShapValues[featureIdx][dimension][documentIdx] = shapValues[documentIdx][dimension][featureIdx];
}
}
}
return swappedShapValues;
}
// returned: ShapValues[featureIdx][dim][documentIdx]
TVector<TVector<TVector<double>>> CalcShapValueWithQuantizedData(
const TFullModel& model,
const TVector<TIntrusivePtr<NModelEvaluation::IQuantizedData>>& quantizedFeatures,
const TVector<TVector<NModelEvaluation::TCalcerIndexType>>& indices,
const TMaybe<TFixedFeatureParams>& fixedFeatureParams,
const size_t documentCount,
int logPeriod,
TShapPreparedTrees* preparedTrees,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
CalcShapValuesByLeaf(
model,
fixedFeatureParams,
logPeriod,
preparedTrees->CalcInternalValues,
localExecutor,
preparedTrees,
calcType
);
const TModelTrees& forest = *model.ModelTrees;
TVector<TVector<TVector<double>>> shapValues(documentCount);
const int featuresCount = SafeIntegerCast<int>(preparedTrees->CombinationClassFeatures.size());
const size_t documentBlockSize = CB_THREAD_LIMIT;
for (ui32 startIdx = 0, blockIdx = 0; startIdx < documentCount; startIdx += documentBlockSize, ++blockIdx) {
NPar::TLocalExecutor::TExecRangeParams blockParams(startIdx, startIdx + Min(documentBlockSize, documentCount - startIdx));
auto quantizedFeaturesBlock = quantizedFeatures[blockIdx];
auto& indicesForBlock = indices[blockIdx];
localExecutor->ExecRange([&](ui32 documentIdx) {
const size_t documentIdxInBlock = documentIdx - startIdx;
auto docIndices = MakeArrayRef(indicesForBlock.data() + forest.GetTreeCount() * documentIdxInBlock, forest.GetTreeCount());
CalcShapValuesForDocumentMulti(
model,
*preparedTrees,
quantizedFeaturesBlock.Get(),
fixedFeatureParams,
featuresCount,
docIndices,
documentIdxInBlock,
&shapValues[documentIdx],
calcType
);
}, blockParams, NPar::TLocalExecutor::WAIT_COMPLETE);
}
return SwapFeatureAndDocumentAxes(shapValues);
}
static void OutputShapValuesMulti(const TVector<TVector<TVector<double>>>& shapValues, TFileOutput& out) {
for (const auto& shapValuesForDocument : shapValues) {
for (const auto& shapValuesForClass : shapValuesForDocument) {
int valuesCount = shapValuesForClass.size();
for (int valueIdx = 0; valueIdx < valuesCount; ++valueIdx) {
out << shapValuesForClass[valueIdx] << (valueIdx + 1 == valuesCount ? '\n' : '\t');
}
}
}
}
void CalcAndOutputShapValues(
const TFullModel& model,
const TDataProvider& dataset,
const TString& outputPath,
int logPeriod,
EPreCalcShapValues mode,
NPar::TLocalExecutor* localExecutor,
ECalcTypeShapValues calcType
) {
TShapPreparedTrees preparedTrees = PrepareTrees(
model,
&dataset,
mode,
localExecutor,
/*calcInternalValues*/ false,
calcType
);
CalcShapValuesByLeaf(
model,
/*fixedFeatureParams*/ Nothing(),
logPeriod,
preparedTrees.CalcInternalValues,
localExecutor,
&preparedTrees,
calcType
);
CB_ENSURE_SCALE_IDENTITY(model.GetScaleAndBias(), "SHAP values");
const int flatFeatureCount = SafeIntegerCast<int>(dataset.MetaInfo.GetFeatureCount());
const size_t documentCount = dataset.ObjectsGrouping->GetObjectCount();
const size_t documentBlockSize = CB_THREAD_LIMIT; // smallest block size that makes threading worthwhile
TImportanceLogger documentsLogger(documentCount, "documents processed", "Processing documents...", logPeriod);
TProfileInfo processDocumentsProfile(documentCount);
THolder<IFeaturesBlockIterator> featuresBlockIterator
= CreateFeaturesBlockIterator(model, *dataset.ObjectsData, 0, documentCount);
TFileOutput out(outputPath);
for (size_t start = 0; start < documentCount; start += documentBlockSize) {
size_t end = Min(start + documentBlockSize, documentCount);
processDocumentsProfile.StartIterationBlock();
TVector<TVector<TVector<double>>> shapValuesForBlock;
shapValuesForBlock.reserve(end - start);
featuresBlockIterator->NextBlock(end - start);
CalcShapValuesForDocumentBlockMulti(
model,
*featuresBlockIterator,
flatFeatureCount,
preparedTrees,
/*fixedFeatureParams*/ Nothing(),
start,
end,
localExecutor,
&shapValuesForBlock,
calcType
);
OutputShapValuesMulti(shapValuesForBlock, out);
processDocumentsProfile.FinishIterationBlock(end - start);
auto profileResults = processDocumentsProfile.GetProfileResults();
documentsLogger.Log(profileResults);
}
}
|
/**
* Copyright (C) 2019-2022 Xilinx, Inc
*
* Licensed under the Apache License, Version 2.0 (the "License"). You may
* not use this file except in compliance with the License. A copy of the
* License is located at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
// ------ I N C L U D E F I L E S -------------------------------------------
// Local - Include Files
#include "XBUtilitiesCore.h"
// 3rd Party Library - Include Files
#include <boost/property_tree/json_parser.hpp>
#include <boost/tokenizer.hpp>
#include <boost/format.hpp>
#include <boost/algorithm/string/split.hpp>
// System - Include Files
#include <iostream>
#include <map>
#include <regex>
// ------ N A M E S P A C E ---------------------------------------------------
using namespace XBUtilities;
// ------ S T A T I C V A R I A B L E S -------------------------------------
static bool m_bVerbose = false;
static bool m_bTrace = false;
static bool m_disableEscapeCodes = false;
static bool m_bShowHidden = false;
static bool m_bForce = false;
// ------ F U N C T I O N S ---------------------------------------------------
void
XBUtilities::setVerbose(bool _bVerbose)
{
bool prevVerbose = m_bVerbose;
if ((prevVerbose == true) && (_bVerbose == false))
verbose("Disabling Verbosity");
m_bVerbose = _bVerbose;
if ((prevVerbose == false) && (_bVerbose == true))
verbose("Enabling Verbosity");
}
bool
XBUtilities::getVerbose()
{
return m_bVerbose;
}
void
XBUtilities::setTrace(bool _bTrace)
{
if (_bTrace)
trace("Enabling Tracing");
else
trace("Disabling Tracing");
m_bTrace = _bTrace;
}
void
XBUtilities::setShowHidden(bool _bShowHidden)
{
if (_bShowHidden)
trace("Hidden commands and options will be shown.");
else
trace("Hidden commands and options will be hidden.");
m_bShowHidden = _bShowHidden;
}
bool
XBUtilities::getShowHidden()
{
return m_bShowHidden;
}
void
XBUtilities::setForce(bool _bForce)
{
m_bForce = _bForce;
if (m_bForce)
trace("Enabling force option");
else
trace("Disabling force option");
}
bool
XBUtilities::getForce()
{
return m_bForce;
}
void
XBUtilities::disable_escape_codes(bool _disable)
{
m_disableEscapeCodes = _disable;
}
bool
XBUtilities::is_escape_codes_disabled() {
return m_disableEscapeCodes;
}
void
XBUtilities::message_(MessageType _eMT, const std::string& _msg, bool _endl, std::ostream & _ostream)
{
static std::map<MessageType, std::string> msgPrefix = {
{ MT_MESSAGE, "" },
{ MT_INFO, "Info: " },
{ MT_WARNING, "Warning: " },
{ MT_ERROR, "Error: " },
{ MT_VERBOSE, "Verbose: " },
{ MT_FATAL, "Fatal: " },
{ MT_TRACE, "Trace: " },
{ MT_UNKNOWN, "<type unknown>: " },
};
// A simple DRC check
if (_eMT > MT_UNKNOWN) {
_eMT = MT_UNKNOWN;
}
// Verbosity is not enabled
if ((m_bVerbose == false) && (_eMT == MT_VERBOSE)) {
return;
}
// Tracing is not enabled
if ((m_bTrace == false) && (_eMT == MT_TRACE)) {
return;
}
_ostream << msgPrefix[_eMT] << _msg;
if (_endl == true) {
_ostream << std::endl;
}
}
void
XBUtilities::message(const std::string& _msg, bool _endl, std::ostream & _ostream)
{
message_(MT_MESSAGE, _msg, _endl, _ostream);
}
void
XBUtilities::info(const std::string& _msg, bool _endl)
{
message_(MT_INFO, _msg, _endl);
}
void
XBUtilities::warning(const std::string& _msg, bool _endl)
{
message_(MT_WARNING, _msg, _endl);
}
void
XBUtilities::error(const std::string& _msg, bool _endl)
{
message_(MT_ERROR, _msg, _endl);
}
void
XBUtilities::verbose(const std::string& _msg, bool _endl)
{
message_(MT_VERBOSE, _msg, _endl);
}
void
XBUtilities::fatal(const std::string& _msg, bool _endl)
{
message_(MT_FATAL, _msg, _endl);
}
void
XBUtilities::trace(const std::string& _msg, bool _endl)
{
message_(MT_TRACE, _msg, _endl);
}
void
XBUtilities::trace_print_tree(const std::string & _name,
const boost::property_tree::ptree & _pt)
{
if (m_bTrace == false) {
return;
}
XBUtilities::trace(_name + " (JSON Tree)");
std::ostringstream buf;
boost::property_tree::write_json(buf, _pt, true /*Pretty print*/);
XBUtilities::message(buf.str());
}
std::string
XBUtilities::wrap_paragraphs( const std::string & unformattedString,
unsigned int indentWidth,
unsigned int columnWidth,
bool indentFirstLine) {
std::vector<std::string> lines;
// Process the string
std::string workingString;
for (const auto &entry : unformattedString) {
// Do we have a new line added by the user
if (entry == '\n') {
lines.push_back(workingString);
workingString.clear();
continue;
}
workingString += entry;
// Check to see if this string is too long
if (workingString.size() >= columnWidth) {
// Find the beginning of the previous 'word'
auto index = workingString.find_last_of(" ");
// None found, keep on adding characters till we find a space
if (index == std::string::npos)
continue;
// Add the line and populate the next line
lines.push_back(workingString.substr(0, index));
workingString = workingString.substr(index + 1);
}
}
if (!workingString.empty())
lines.push_back(workingString);
// Early exit, nothing here
if (lines.size() == 0)
return std::string();
// -- Build the formatted string
std::string formattedString;
// Iterate over the lines building the formatted string
const std::string indention(indentWidth, ' ');
auto iter = lines.begin();
while (iter != lines.end()) {
// Add an indention
if (iter != lines.begin() || indentFirstLine)
formattedString += indention;
// Add formatted line
formattedString += *iter;
// Don't add a '\n' on the last line
if (++iter != lines.end())
formattedString += "\n";
}
return formattedString;
}
bool
XBUtilities::can_proceed(bool force)
{
bool proceed = false;
std::string input;
std::cout << "Are you sure you wish to proceed? [Y/n]: ";
if (force)
std::cout << "Y (Force override)" << std::endl;
else
std::getline(std::cin, input);
// Ugh, the std::transform() produces windows compiler warnings due to
// conversions from 'int' to 'char' in the algorithm header file
boost::algorithm::to_lower(input);
//std::transform( input.begin(), input.end(), input.begin(), [](unsigned char c){ return std::tolower(c); });
//std::transform( input.begin(), input.end(), input.begin(), ::tolower);
// proceeds for "y", "Y" and no input
proceed = ((input.compare("y") == 0) || input.empty());
if (!proceed)
std::cout << "Action canceled." << std::endl;
return proceed;
}
void
XBUtilities::sudo_or_throw_err()
{
#ifndef _WIN32
if ((getuid() == 0) || (geteuid() == 0))
return;
std::cout << "ERROR: root privileges required." << std::endl;
throw std::errc::operation_canceled;
#endif
}
|
Located above the surface of our planet is a complex mixture of gases and suspended liquid and solid particles known as the atmosphere. Operating within the atmosphere is a variety of processes we call weather. Some measurable variables associated with weather include air temperature, air pressure, humidity, wind, and precipitation. The atmosphere also contains organized phenomena such as tornadoes, thunderstorms, mid-latitude cyclones, hurricanes, and monsoons. Climate refers to the general pattern of weather for a region over a specific period of time. Scientists have discovered that human activities can influence Earth’s climate and weather, producing problems such as global warming, ozone depletion, and acid precipitation.
|
// Copyright (c) 2013-2016 Anton Kozhevnikov, Thomas Schulthess
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are permitted provided that
// the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the
// following disclaimer.
// 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions
// and the following disclaimer in the documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
// PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
// CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/** \file step_function.cpp
*
* \brief Contains remaining implementation of sirius::Step_function class.
*/
#include "step_function.h"
namespace sirius {
Step_function::Step_function(Unit_cell const& unit_cell__,
FFT3D* fft__,
Gvec_FFT_distribution const& gvec_fft_distr__,
Communicator const& comm__)
{
PROFILE();
if (unit_cell__.num_atoms() == 0) {
return;
}
auto& gvec = gvec_fft_distr__.gvec();
auto ffac = get_step_function_form_factors(gvec.num_shells(), unit_cell__, gvec, comm__);
step_function_pw_.resize(gvec.num_gvec());
step_function_.resize(fft__->local_size());
std::vector<double_complex> f_pw = unit_cell__.make_periodic_function(ffac, gvec);
for (int ig = 0; ig < gvec.num_gvec(); ig++) {
step_function_pw_[ig] = -f_pw[ig];
}
step_function_pw_[0] += 1.0;
fft__->prepare();
fft__->transform<1>(gvec_fft_distr__, &step_function_pw_[gvec_fft_distr__.offset_gvec_fft()]);
fft__->output(&step_function_[0]);
fft__->dismiss();
double vit = 0.0;
for (int i = 0; i < fft__->local_size(); i++) {
vit += step_function_[i];
}
vit *= (unit_cell__.omega() / fft__->size());
fft__->comm().allreduce(&vit, 1);
if (std::abs(vit - unit_cell__.volume_it()) > 1e-10) {
std::stringstream s;
s << "step function gives a wrong volume for IT region" << std::endl
<< " difference with exact value : " << std::abs(vit - unit_cell__.volume_it());
WARNING(s);
}
#ifdef __PRINT_OBJECT_CHECKSUM
//double_complex z1 = mdarray<double_complex, 1>(&step_function_pw_[0], fft__->local_size()).checksum();
double d1 = mdarray<double, 1>(&step_function_[0], fft__->local_size()).checksum();
comm__.allreduce(&d1, 1);
DUMP("checksum(step_function): %18.10f", d1);
//DUMP("checksum(step_function_pw): %18.10f %18.10f", std::real(z1), std::imag(z1));
#endif
}
mdarray<double, 2> Step_function::get_step_function_form_factors(int num_gsh,
Unit_cell const& unit_cell__,
Gvec const& gvec__,
Communicator const& comm__) const
{
mdarray<double, 2> ffac(unit_cell__.num_atom_types(), num_gsh);
splindex<block> spl_num_gvec_shells(num_gsh, comm__.size(), comm__.rank());
#pragma omp parallel for
for (int igsloc = 0; igsloc < spl_num_gvec_shells.local_size(); igsloc++)
{
int igs = spl_num_gvec_shells[igsloc];
double G = gvec__.shell_len(igs);
double g3inv = (igs) ? 1.0 / std::pow(G, 3) : 0.0;
for (int iat = 0; iat < unit_cell__.num_atom_types(); iat++)
{
double R = unit_cell__.atom_type(iat).mt_radius();
double GR = G * R;
ffac(iat, igs) = (igs) ? (std::sin(GR) - GR * std::cos(GR)) * g3inv : std::pow(R, 3) / 3.0;
}
}
int ld = unit_cell__.num_atom_types();
comm__.allgather(ffac.at<CPU>(), ld * spl_num_gvec_shells.global_offset(), ld * spl_num_gvec_shells.local_size());
return ffac;
}
}
|
#ifndef XEMMAI__SYMBOL_H
#define XEMMAI__SYMBOL_H
#include "object.h"
namespace xemmai
{
class t_symbol
{
friend class t_object;
friend struct t_finalizes<t_bears<t_symbol>>;
friend struct t_type_of<t_object>;
friend struct t_type_of<t_type>;
friend struct t_type_of<t_symbol>;
std::map<std::wstring, t_slot, std::less<>>::iterator v_entry;
t_symbol(std::map<std::wstring, t_slot, std::less<>>::iterator a_entry) : v_entry(a_entry)
{
v_entry->second = t_object::f_of(this);
}
~t_symbol();
public:
XEMMAI__PORTABLE__EXPORT static t_object* f_instantiate(std::wstring_view a_value);
const std::wstring& f_string() const
{
return v_entry->first;
}
};
template<>
struct t_type_of<t_symbol> : t_holds<t_symbol>
{
void f_define();
using t_base::t_base;
static void f_do_scan(t_object* a_this, t_scan a_scan)
{
a_scan(a_this->f_as<t_symbol>().v_entry->second);
}
void f_do_instantiate(t_pvalue* a_stack, size_t a_n);
};
}
#endif
|
////////////////////////////////////////////////////////////
//
// SFML - Simple and Fast Multimedia Library
// Copyright (C) 2007-2018 Laurent Gomila ([email protected])
//
// This software is provided 'as-is', without any express or implied warranty.
// In no event will the authors be held liable for any damages arising from the use of this software.
//
// Permission is granted to anyone to use this software for any purpose,
// including commercial applications, and to alter it and redistribute it freely,
// subject to the following restrictions:
//
// 1. The origin of this software must not be misrepresented;
// you must not claim that you wrote the original software.
// If you use this software in a product, an acknowledgment
// in the product documentation would be appreciated but is not required.
//
// 2. Altered source versions must be plainly marked as such,
// and must not be misrepresented as being the original software.
//
// 3. This notice may not be removed or altered from any source distribution.
//
////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////
// Headers
////////////////////////////////////////////////////////////
#include <SFML/Graphics/Shader.h>
#include <SFML/Graphics/ShaderStruct.h>
#include <SFML/Graphics/TextureStruct.h>
#include <SFML/Graphics/ConvertTransform.hpp>
#include <SFML/Internal.h>
#include <SFML/CallbackStream.h>
////////////////////////////////////////////////////////////
sfShader* sfShader_createFromFile(const char* vertexShaderFilename, const char* geometryShaderFilename, const char* fragmentShaderFilename)
{
bool success = false;
sfShader* shader = new sfShader;
if (vertexShaderFilename || geometryShaderFilename || fragmentShaderFilename)
{
if (!geometryShaderFilename)
{
if (!vertexShaderFilename)
{
// fragment shader only
success = shader->This.loadFromFile(fragmentShaderFilename, sf::Shader::Fragment);
}
else if (!fragmentShaderFilename)
{
// vertex shader only
success = shader->This.loadFromFile(vertexShaderFilename, sf::Shader::Vertex);
}
else
{
// vertex + fragment shaders
success = shader->This.loadFromFile(vertexShaderFilename, fragmentShaderFilename);
}
}
else
{
if (!vertexShaderFilename && !fragmentShaderFilename)
{
// geometry shader only
success = shader->This.loadFromFile(geometryShaderFilename, sf::Shader::Geometry);
}
else
{
// vertex + geometry + fragment shaders
success = shader->This.loadFromFile(vertexShaderFilename, geometryShaderFilename, fragmentShaderFilename);
}
}
}
if (!success)
{
delete shader;
shader = NULL;
}
return shader;
}
////////////////////////////////////////////////////////////
sfShader* sfShader_createFromMemory(const char* vertexShader, const char* geometryShader, const char* fragmentShader)
{
bool success = false;
sfShader* shader = new sfShader;
if (vertexShader || geometryShader || fragmentShader)
{
if (!geometryShader)
{
if (!vertexShader)
{
// fragment shader only
success = shader->This.loadFromMemory(fragmentShader, sf::Shader::Fragment);
}
else if (!fragmentShader)
{
// vertex shader only
success = shader->This.loadFromMemory(vertexShader, sf::Shader::Vertex);
}
else
{
// vertex + fragment shaders
success = shader->This.loadFromMemory(vertexShader, fragmentShader);
}
}
else
{
if (!vertexShader && !fragmentShader)
{
// geometry shader only
success = shader->This.loadFromMemory(geometryShader, sf::Shader::Geometry);
}
else
{
// vertex + geometry + fragment shaders
success = shader->This.loadFromMemory(vertexShader, geometryShader, fragmentShader);
}
}
}
if (!success)
{
delete shader;
shader = NULL;
}
return shader;
}
////////////////////////////////////////////////////////////
sfShader* sfShader_createFromStream(sfInputStream* vertexShaderStream, sfInputStream* geometryShaderStream, sfInputStream* fragmentShaderStream)
{
bool success = false;
sfShader* shader = new sfShader;
if (vertexShaderStream || geometryShaderStream || fragmentShaderStream)
{
if (!geometryShaderStream)
{
if (!vertexShaderStream)
{
// fragment shader only
CallbackStream stream(fragmentShaderStream);
success = shader->This.loadFromStream(stream, sf::Shader::Fragment);
}
else if (!fragmentShaderStream)
{
// vertex shader only
CallbackStream stream(vertexShaderStream);
success = shader->This.loadFromStream(stream, sf::Shader::Vertex);
}
else
{
// vertex + fragment shaders
CallbackStream vertexStream(vertexShaderStream);
CallbackStream fragmentStream(fragmentShaderStream);
success = shader->This.loadFromStream(vertexStream, fragmentStream);
}
}
else
{
CallbackStream geometryStream(geometryShaderStream);
if (!vertexShaderStream && !fragmentShaderStream)
{
// geometry shader only
success = shader->This.loadFromStream(geometryStream, sf::Shader::Geometry);
}
else
{
// vertex + geometry + fragment shaders
CallbackStream vertexStream(vertexShaderStream);
CallbackStream fragmentStream(fragmentShaderStream);
success = shader->This.loadFromStream(vertexStream, geometryStream, fragmentStream);
}
}
}
if (!success)
{
delete shader;
shader = NULL;
}
return shader;
}
////////////////////////////////////////////////////////////
void sfShader_destroy(sfShader* shader)
{
delete shader;
}
////////////////////////////////////////////////////////////
void sfShader_setFloatUniform(sfShader* shader, const char* name, float x)
{
CSFML_CALL(shader, setUniform(name, x));
}
////////////////////////////////////////////////////////////
void sfShader_setVec2Uniform(sfShader* shader, const char* name, sfGlslVec2 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Vec2(vector.x, vector.y)));
}
////////////////////////////////////////////////////////////
void sfShader_setVec3Uniform(sfShader* shader, const char* name, sfGlslVec3 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Vec3(vector.x, vector.y, vector.z)));
}
////////////////////////////////////////////////////////////
void sfShader_setVec4Uniform(sfShader* shader, const char* name, sfGlslVec4 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Vec4(vector.x, vector.y, vector.z, vector.w)));
}
////////////////////////////////////////////////////////////
void sfShader_setColorUniform(sfShader* shader, const char* name, sfColor color)
{
sfGlslVec4 vec4;
vec4.x = color.r / 255.f;
vec4.y = color.g / 255.f;
vec4.z = color.b / 255.f;
vec4.w = color.a / 255.f;
sfShader_setVec4Uniform(shader, name, vec4);
}
////////////////////////////////////////////////////////////
void sfShader_setIntUniform(sfShader* shader, const char* name, int x)
{
CSFML_CALL(shader, setUniform(name, x));
}
////////////////////////////////////////////////////////////
void sfShader_setIvec2Uniform(sfShader* shader, const char* name, sfGlslIvec2 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Ivec2(vector.x, vector.y)));
}
////////////////////////////////////////////////////////////
void sfShader_setIvec3Uniform(sfShader* shader, const char* name, sfGlslIvec3 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Ivec3(vector.x, vector.y, vector.z)));
}
////////////////////////////////////////////////////////////
void sfShader_setIvec4Uniform(sfShader* shader, const char* name, sfGlslIvec4 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Ivec4(vector.x, vector.y, vector.z, vector.w)));
}
////////////////////////////////////////////////////////////
void sfShader_setIntColorUniform(sfShader* shader, const char* name, sfColor color)
{
sfGlslIvec4 ivec4;
ivec4.x = (int)color.r;
ivec4.y = (int)color.g;
ivec4.z = (int)color.b;
ivec4.w = (int)color.a;
sfShader_setIvec4Uniform(shader, name, ivec4);
}
////////////////////////////////////////////////////////////
void sfShader_setBoolUniform(sfShader* shader, const char* name, sfBool x)
{
CSFML_CALL(shader, setUniform(name, x != sfFalse));
}
////////////////////////////////////////////////////////////
void sfShader_setBvec2Uniform(sfShader* shader, const char* name, sfGlslBvec2 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Bvec2(vector.x != sfFalse, vector.y != sfFalse)));
}
////////////////////////////////////////////////////////////
void sfShader_setBvec3Uniform(sfShader* shader, const char* name, sfGlslBvec3 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Bvec3(vector.x != sfFalse, vector.y != sfFalse, vector.z != sfFalse)));
}
////////////////////////////////////////////////////////////
void sfShader_setBvec4Uniform(sfShader* shader, const char* name, sfGlslBvec4 vector)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Bvec4(vector.x != sfFalse, vector.y != sfFalse, vector.z != sfFalse, vector.w != sfFalse)));
}
////////////////////////////////////////////////////////////
void sfShader_setMat3Uniform(sfShader* shader, const char* name, const sfGlslMat3* matrix)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Mat3(matrix->array)));
}
////////////////////////////////////////////////////////////
void sfShader_setMat4Uniform(sfShader* shader, const char* name, const sfGlslMat4* matrix)
{
CSFML_CALL(shader, setUniform(name, sf::Glsl::Mat4(matrix->array)));
}
////////////////////////////////////////////////////////////
void sfShader_setTextureUniform(sfShader* shader, const char* name, const sfTexture* texture)
{
CSFML_CHECK(texture);
CSFML_CALL(shader, setUniform(name, *texture->This));
}
////////////////////////////////////////////////////////////
void sfShader_setCurrentTextureUniform(sfShader* shader, const char* name)
{
CSFML_CALL(shader, setUniform(name, sf::Shader::CurrentTexture));
}
////////////////////////////////////////////////////////////
void sfShader_setFloatUniformArray(sfShader* shader, const char* name, const float* scalarArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, scalarArray, length));
}
////////////////////////////////////////////////////////////
void sfShader_setVec2UniformArray(sfShader* shader, const char* name, const sfGlslVec2* vectorArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, reinterpret_cast<const sf::Glsl::Vec2*>(vectorArray), length));
}
////////////////////////////////////////////////////////////
void sfShader_setVec3UniformArray(sfShader* shader, const char* name, const sfGlslVec3* vectorArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, reinterpret_cast<const sf::Glsl::Vec3*>(vectorArray), length));
}
////////////////////////////////////////////////////////////
void sfShader_setVec4UniformArray(sfShader* shader, const char* name, const sfGlslVec4* vectorArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, reinterpret_cast<const sf::Glsl::Vec4*>(vectorArray), length));
}
////////////////////////////////////////////////////////////
void sfShader_setMat3UniformArray(sfShader* shader, const char* name, const sfGlslMat3* matrixArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, reinterpret_cast<const sf::Glsl::Mat3*>(matrixArray), length));
}
////////////////////////////////////////////////////////////
void sfShader_setMat4UniformArray(sfShader* shader, const char* name, const sfGlslMat4* matrixArray, size_t length)
{
CSFML_CALL(shader, setUniformArray(name, reinterpret_cast<const sf::Glsl::Mat4*>(matrixArray), length));
}
////////////////////////////////////////////////////////////
void sfShader_setFloatParameter(sfShader* shader, const char* name, float x)
{
CSFML_CALL(shader, setParameter(name, x));
}
////////////////////////////////////////////////////////////
void sfShader_setFloat2Parameter(sfShader* shader, const char* name, float x, float y)
{
CSFML_CALL(shader, setParameter(name, x, y));
}
////////////////////////////////////////////////////////////
void sfShader_setFloat3Parameter(sfShader* shader, const char* name, float x, float y, float z)
{
CSFML_CALL(shader, setParameter(name, x, y, z));
}
////////////////////////////////////////////////////////////
void sfShader_setFloat4Parameter(sfShader* shader, const char* name, float x, float y, float z, float w)
{
CSFML_CALL(shader, setParameter(name, x, y, z, w));
}
////////////////////////////////////////////////////////////
void sfShader_setVector2Parameter(sfShader* shader, const char* name, sfVector2f vector)
{
CSFML_CALL(shader, setParameter(name, sf::Vector2f(vector.x, vector.y)));
}
////////////////////////////////////////////////////////////
void sfShader_setVector3Parameter(sfShader* shader, const char* name, sfVector3f vector)
{
CSFML_CALL(shader, setParameter(name, sf::Vector3f(vector.x, vector.y, vector.z)));
}
////////////////////////////////////////////////////////////
void sfShader_setColorParameter(sfShader* shader, const char* name, sfColor color)
{
CSFML_CALL(shader, setParameter(name, sf::Color(color.r, color.g, color.b, color.a)));
}
////////////////////////////////////////////////////////////
void sfShader_setTransformParameter(sfShader* shader, const char* name, sfTransform transform)
{
CSFML_CALL(shader, setParameter(name, convertTransform(transform)));
}
////////////////////////////////////////////////////////////
void sfShader_setTextureParameter(sfShader* shader, const char* name, const sfTexture* texture)
{
CSFML_CHECK(texture);
CSFML_CALL(shader, setParameter(name, *texture->This));
}
////////////////////////////////////////////////////////////
void sfShader_setCurrentTextureParameter(sfShader* shader, const char* name)
{
CSFML_CALL(shader, setParameter(name, sf::Shader::CurrentTexture));
}
////////////////////////////////////////////////////////////
unsigned int sfShader_getNativeHandle(const sfShader* shader)
{
CSFML_CALL_RETURN(shader, getNativeHandle(), 0);
}
////////////////////////////////////////////////////////////
void sfShader_bind(const sfShader* shader)
{
sf::Shader::bind(shader ? &shader->This : NULL);
}
////////////////////////////////////////////////////////////
sfBool sfShader_isAvailable(void)
{
return sf::Shader::isAvailable() ? sfTrue : sfFalse;
}
////////////////////////////////////////////////////////////
sfBool sfShader_isGeometryAvailable(void)
{
return sf::Shader::isGeometryAvailable() ? sfTrue : sfFalse;
}
|
Mouseland (why do people vote against their self-interest?)
“All the laws were good laws. For cats.”—Tommy Douglas
The other day I called Telus to cancel my campaign phones and internet service. Groan. After listening to canned music for 17 minutes I was transferred to a customer service rep. He took my information, argued with me when I wouldn’t give him my email address, and finally transferred me to the cancellation department…where I was put on hold. Groan. After listening to canned music for 6 minutes the cancellation rep came on the line.
She was a lovely woman who follows politics closely. We had a lively discussion about the recent by-elections and the astounding fact that the PCs were elected in all four ridings.
“Why do Albertans continue to vote the PCs into office?” I wondered.
“Ah,” she said, “you should check out Tommy Douglas’ Mouseland speech”. So I did.
Mouseland is a very short (and humorous) speech that is as relevant today as it was when Tommy Douglas first gave it in 1944.
Here’s the link. This is the animated version that’s introduced by Tommy Douglas’ grandson, Kiefer Sutherland. I’ll wait here while you check it out.
…(canned music)…
Did you check it out?
OK, here’s the Coles Notes version.
Mouseland is a place where all the little mice lived. They had a Parliament and voted every 4 years. On election day all the little mice would go to the polls and elect a government—a government made up of big, fat, black cats.
The cats passed good laws. Laws that were good for cats; but oh so hard on mice. They made mouseholes big enough for a cat’s paw to fit in and set speed limits on mice so they’d be easier to catch.
Life was hard for the mice and they finally decided to do something about it. They voted the black cats out and replaced them with …. white cats (who said things would be different). But things just got harder.
So the mice voted the white cats out and put the black cats back in, then they tried half white and half black cats (a coalition). But the trouble wasn’t the colour of the cats, it was that they were cats. And because they were cats, they looked after cats, not mice.
Finally a little mouse had an idea…instead of electing a government made up of cats, why don’t we elect a government made up of mice?
If you clicked the link you’ll know the joke comes here. Go ahead, check, I’ll wait.
(…canned music…)
Albertans, unlike the little mice, don’t bother voting in different coloured cats. They continue to vote for the same cats wearing different coloured collars. Redford had a red collar. Prentice has a reddish-blue collar. But anyone who has ever owned a cat knows that unless the collar is so tight it almost strangles the cat, he won’t be wearing it for long.
Okay, enough about cats and mice.
What’s the matter with Kansas?
Why do people continue to support conservative governments bent on deregulation, privatization and subsidization of corporations (and the wealthy) at the expense of public education, public healthcare, the frail, the elderly and the poor?
This question has bedeviled political scientists for decades.
Here are two competing theories:
The duping hypothesis: Thomas Frank, author of What’s the Matter with Kansas?, argues that the Right dupes voters into voting against their self-interest by “hooking” them with hot-button social issues like abortion, gun control, and gay marriage and then fanning the flames of a “class divide” (Rob Ford vs latte-drinking effetes).
He attributes the remarkable sea change in Kansas politics—the once progressive state voted 80% in favour of George Bush in 2000—to an anti-abortion demonstration held in Wichita in 1990.
The Republicans, ever mindful of opportunity, saw hundreds of anti-abortionists chain themselves to cars and lie down in the road and said: while we admire your courage and conviction, we’ve got something a lot smarter for you to do than lying on the highway. And it worked. People who weren’t the least bit “political” jumped at the chance to work with and vote for the Republicans and it snowballed from there.
It’s about moral vision, stupid: Jonathan Haidt, author of The Righteous Mind,* says the duping hypothesis is delusional because it lets the Left absolve itself of blame while avoiding the hard work necessary to develop a successful strategy for the 21st century.
Haidt says what’s really going on is this: the Right aligns itself with lofty moral values like patriotism, social order, strong families, free enterprise and rugged individualism (no nanny state for me!) instead of pedestrian government programs. So a vote for the Right is not a vote against one’s self-interest, but rather a vote in favour of one’s moral ideals.
Do either of these theories ring true in the Alberta context? Or is Mouseland actually Dreamland—a place where we believe that if we leave well enough alone it will all work out in the end?
Over to you Alberta…
(…canned music…)
33 Responses to Mouseland (why do people vote against their self-interest?)
1. Carlos Beca says:
Susan 17 + 6 = 23 minutes. Not bad for Telus. I am surprised you only waited 23 minutes. Since they implemented the automated systems, which were supposed to offer quicker service, the waiting times are atrocious. On top of that we now get people that think that they have the right to treat you like you should not be calling in the first place. You were lucky that you were not redirected to Costa Rica or India where they have the same fix for every problem – Reboot.
People like Tommy Douglas will probably not be in politics for a long time to come. With the destruction of great Liberal Arts programs around the so-called ‘First World’ because they just do not get you a job, it is no surprise that we have people who cannot think – they just repeat over and over what their masters tell them to. Without any education in Liberal Arts, people are less and less capable of understanding what is going on, making up their own minds about difficult issues, and trying to offer suggestions based on fact, experience and wisdom. The extreme right wing wants them to be consumers and to vote for them. Thinking? What is that for? The market will resolve our issues better than we can ever imagine. Just look around.
In my dictionary it is called self-inflicted stupidity and self-destruction. The Islamic fundamentalist world does it with human bombs; we are more sophisticated. We guide them so ever gently into an absurd world without any objectives other than having lots of money, do whatever it takes to be a celebrity to ‘make it’ into the elites, and be a consumer.
• Carlos, picking up on your last point, we do indeed “guide them so ever gently into an absurd world without any objectives other than having lots of money, do whatever it takes to be a celebrity to ‘make it’ into the elites, and be a consumer”.
I’ve had this conversation about voting against one’s self-interest with many people. A friend told me that the progressives are asking people to vote against the American Dream. In the US everyone is a millionaire; some of them just haven’t made it yet. So why would they vote in favour of higher taxes and more support for the less fortunate when they don’t see themselves as belonging to that particular demographic for long? Like I said…Dreamland.
And yes, 23 minutes isn’t too bad in the grand scheme of things. I was once on hold with Air Canada for so long I fell asleep and almost lost my place in line when a real person finally came on the line!
• terry says:
You can look forward to the excitement of being a senior who waits: 16 months from paper application to receipt of first payment of Guaranteed Income Supplement — that’s the government subsidy for seniors living in poverty. The amount you get is another topic. Several wait times over 45 minutes with Alberta Health Seniors because they only have one staff person. Most seniors don’t have a speaker phone, so that requires sitting for that time holding the receiver to your ear. Any request or attempt to get information, correct information, report not receiving money, etc. will require many such calls.
• Terry I fear things will only get worse now that oil prices have plummeted and the government has even less revenue to spend on health care, education and seniors care. We need to keep the pressure on our MLAs, it’s really all we can do until we get a chance to vote as many of them out of office as possible in 2016 or sooner.
Take care.
2. jvandervlugt says:
Hi Susan. People are petrified of change and trying something new. We’ve become complacent. I remember when Bush (can’t remember whether it was Bush 1 or Bush 2) used fear tactics during his campaign; I couldn’t believe people were falling for it. Whenever his image came on my TV screen hubby would have to change the channel. People would rather stick with the status quo because of fear that the other guy might be worse. C’mon, change it up. I hope this makes sense. I’m trying to respond while sitting on a bus.
• Joanna, the fact that Albertans couldn’t bring themselves to elect even one non-PC politician in the four by-elections proves your point. Albertans said they were steaming mad and were going to send the PCs a message…and yet they replaced all four MLAs with more PCs. They could have elected four Wildrose candidates and not have changed the balance of power one iota. But like the little mice, they fell for Prentice’s “new” message (the party is “under new management”, it needs to earn back the trust of the people, it will be accountable to the electorate, etc, etc, etc).
Prentice’s government, just like the Redford government and the Stelmach government before it, is busy taking care of business. The first official act of Stephen Mandel, the health minister, was to enact regulations banning flavoured cigarettes except menthol-flavoured tobacco, which scientists say is the most effective at luring young people into smoking. Apparently Hal Danchilla, a tobacco lobbyist, had been involved in Mandel and Prentice’s election campaigns. But that’s not it. According to Mandel he needs to do more consultation on menthol before banning it. Funny, when the Redford government decided to drop the legal drinking limit to .05 and give itself the power to suspend drivers’ licences for 3 days and impound vehicles, they didn’t see any reason to consult with anyone at all.
3. Carlos Beca says:
LOL Susan about Air Canada. It seems that it is everywhere. We made our services like those in most of the rest of the world. Instead of improving them, we jumped on the globalization bandwagon and transformed great services into what I call ‘you only have a problem if you can wait on the phone for longer than 20 minutes’.
Susan, I had never heard of your suggestion that ‘…they do not see themselves as belonging to that particular demographic…’ but it makes a lot of sense. Frankly, I think that without a revolution the US will not change. The money and interests are completely entrenched and I doubt it is possible to convince a regular American citizen that anything other than cowboy capitalism works. I remember working in Prudhoe Bay, Alaska for a short period of time and I was one of the very few non-American citizens there. One night, discussing politics and the different social/political systems around the world, I mentioned that the Nordic countries did very well with social democracy and high taxes and did not even think about ‘the American dream’. I believe that there were about 5 of us, and at least 3 of them told me, all but pointing a gun at me, that the only reason people lived in those countries is that they were not allowed to leave. 🙂
As far as Canada goes, I think that we still have a chance to make it to a more common-sense position than the one we are in now.
By the way, continuing my propaganda, I would like to suggest this video, which is already dated but well worth it.
• Carlos, your experience with your American co-workers in Prudhoe Bay is quite an eye-opener! Firstly, their belief that the residents of a Nordic country would not be allowed to leave is mindbogglingly stupid. Secondly, their failure to understand the relationship between taxation and the government’s ability to deliver social services is a major stumbling block for all progressive parties. The American Dream appears to be based on quantity, not quality. One can demonstrate “quantity” by how much money/credit/debt one can accumulate in order to buy stuff. Quality doesn’t enter into the equation. A citizen’s quality of life is reflected in his ability to access quality education and quality healthcare so that he can maximize his full potential. A citizen’s ability to care for the frail, elderly and those less fortunate allows others to maximize theirs. Only the well-to-do in the US have “quality” as well as “quantity”. Sadly, those who expect to join the millionaires club any day now just don’t seem to understand that.
PS Thanks for the link. I look forward to watching it.
4. susan palmer says:
Thanks so much for this, Susan – what a treat to hear Tommy Douglas’s own telling of his charming – and all too true – story.
• You’re welcome Susan. Tommy Douglas illustrated the problem very well. I wish he’d set out a way for the mice to agree on just one mouse to represent them so that we could stop the vote-splitting problem. We’re going to have to get to this solution sooner rather than later if we hope to put more progressive MLAs and MPs into office.
• Carol Wodak says:
Tommy didn’t go on to talk about spotted or striped cats… how does that chapter unfold?
• Carol: it unfolds as you’d expect, the spotted and striped cats talk a better line–“we’re under new management” is their new slogan–but in the end it’s the same old thing. Oil prices are down and we (who?) need to exercise “caution” and “find new efficiencies in government” in order to balance the budget. Funnily enough, when WR candidate John Fletcher said the WR would pay for schools and hospitals by eliminating “waste” in the system, Gord Dirks, the newly elected education minister, said if the WR expected to balance the budget by eliminating waste they were in “financial fairyland”. And yet, here we are, in PC financial fairyland “looking at different efficiencies throughout the government of Alberta…to make sure that we have a balanced budget moving forward” (Finance Minister Robin Campbell in the Herald Nov 19, 2014 A4).
5. david swann says:
Another insightful and thoughtful brief and excellent public commentary, thank you! I wonder what Rev. Tommy or Dr Haidt would say about the 1000 students at MRU, many of whom had the opportunity to vote for the first time, on the main floor of their west residence and chose not to. While we would all benefit from a proper study of these young people, my assumption is that a third reason for voting against one’s self-interest (in this case, not voting at all!) is the deep (unconscious?) belief – especially after 43 years in a one-party state – that their opinion will not make a difference. This attitude of powerlessness, not far from despair, should be a strong call to re-commit ourselves to revive Alberta’s democracy.
• Carlos Beca says:
David, I could not agree more on the attitude of powerlessness that you mention. It really does not surprise me; I just have to look at my own frustration trying to keep up with the absurdity that is going on in our society. What really amazes me is the study after study and the question after question on radio, TV and Facebook and whatever else we use these days, asking why. Why are our young people so distant from political/social issues? The question should be: why are we trying to con young people into accepting the garbage we have created? I do not believe for a second that politicians and some academics do not know the real reason. This attitude of surprise is in itself part of the total disconnection from real democratic values and, in the case of some politicians, total dishonesty and lack of ethics. The so-called Progressive parties in Alberta are not any better. Their inability to work together for the benefit of the public is mind-boggling. They are incapable of putting aside their egos in order to save this province from total ruin.
• David and Carlos: you raise two vexing questions.
Why don’t the young vote? They say it’s because we don’t talk to them about issues that interest them. So David and I went to where they live (literally). We knocked on hundreds of doors at Mount Royal University and talked to the students about out-of-control tuition fees, and yet only a handful came out on election day. While it was good to see 300 students stage a protest on the steps of the Legislature on Monday, a vote for change would have been much more effective.
Powerlessness? What difference does it make? This is something I heard on the doors. It came out as cynicism and distrust of anyone in politics, and frustration with all the progressive candidates because they couldn’t find a way to merge or at the very least cooperate in order to stop vote splitting. If we could solve the second concern by finding a way to speak with ONE progressive voice we’d be able to put progressives into government and demonstrate that not all politicians are in it for themselves. This is a huge challenge.
• Ah yes, it would be helpful if there were valid studies on the reasons for poor voter turnout among university students. I agree, David, that post-secondary students’ voter apathy is partially due to 43 years of a one-party state. As Susan also notes, many students are frustrated with progressive candidates’ and their party leaders’ unwillingness to unite. That would require them to act on the values they promote—open-minded cooperation that could finally win an election and allow them to implement the values that built this country. But they won’t do that. As such, it seems most Liberal, NDP, and Alberta Party politicians value rugged individualism and survival of the fittest (conservative values) more than equality of opportunity and strong social programs (liberal values). If so, they are “in it for themselves.” From a psychological perspective, I think it’s worth considering that university students are at an age when forming meaningful relationships and establishing a successful career path are their main focus, even if it means taking on heavy debt burdens. This is normal in a reasonably stable country, for as one famous economist said, class-consciousness only emerges when enough people suffer. Looks like we’re waiting for that to happen. A united centrist party could do that better than 3 parties vying to outdo each other at the expense of the broader community. It could even put an end to students’ cynical, frustrated, apathetic view of Alberta politics. And let’s not forget their parents! And grandparents! I digress…
• Correction–meant to say that a united centrist party could do better than 3 parties…
• Judy, as oil prices continue to hover in the low $70 range I wonder whether Alberta is on the cusp of an era where enough people suffer (to use the words of the economist) to allow class consciousness to emerge. With all his talk about this not being “business as usual”, Prentice is making it clear that bad times are just around the corner. Given that Ralph and his successors already burned the furniture, there’s not much more left, is there?
• Carlos Beca says:
Could not agree more Judy. The Liberals, the NDP and the Alberta Party would do exactly the same thing these guys are doing. They are in it for themselves, not for the province and its people. Any real democrat understands this with his/her eyes closed. The problem is that we have lost the sense of democracy, and so the situation becomes the crisis and paralysis we are already witnessing everywhere.
There is no real difference between the parties because once they get there they just do not have any resilience to do what they preach and the reason being that deeply inside they are all the same. Power, money, prestige, elitism and nothing else.
• Carlos, while I’d agree that many politicians, particularly those who’ve been in power for a long time, are corrupt, I’ve had the privilege of getting to know a handful who are doing it because they actually care about democracy and improving the quality of life for those they represent. The PC government fired Dr. David Swann from his job as a public health officer for daring to speak out in favour of the Kyoto Protocol. It was only then that David entered politics. He felt strongly that the government should not be allowed to interfere with freedom of speech. Earlier this week when David responded to the Throne Speech in the Leg, he made a number of good points, including the comment that if the PC government truly wanted to do the right thing it would move to proportional representation.
The difficulty these “good” politicians face is that they’re powerless to make meaningful change when confronted by a corrupt majority government elected under a “first past the post” system. The fact that the majority of citizens think politics is boring, irrelevant and a waste of time only perpetuates this broken system.
• Carlos Beca says:
Susan I fully agree with you and I know well what David has done and I know that generalizing is obviously wrong. On the other hand I ask the question ‘When was the last time you remember a decent politician in a position of being able to make a difference?’ – The last one I remember with any connection whatsoever with the citizens of this province and obvious crook was Peter Lougheed. So if that is the case, then we have to ask a second question – How can we ever accomplish anything with this kind of government – 1 in half century?
So then we come to the final question – What kind of system brings about this degree of failure and how do we fix this situation?
Well I can at least answer this last question – This is an anti-democratic system that works for the very few.
I know that some people reading this or other comments I made can think ‘Well why don’t you run and fix it?’ – I can answer that one as well. I do not think this system is fixable. The vested interests and the controls are just too entrenched for a possible change. This is the reason why we are stuck.
• Carlos Beca says:
I apologize for the mistake – I just realized this morning that instead of having ‘ Peter Lougheed ,…. and obviously not a crook…’ I, after a stupid editing left exactly what I did not mean. I deeply apologize for this. Sometimes these things happen.
• That’s OK Carlos, I understood what you were getting at. Of course you’re absolutely right about the perils of living in a one-party state, particularly one that is also totally dependent on one industry. Reminds me of the song Sixteen Tons — the story of a coal miner who is essentially an indentured servant to the coal company. I first heard it as a child and I still remember the chorus. Here it is:
“You loaded sixteen tons and what do you get?
Another day older and deeper in debt
Saint Peter don’t you call me ’cause I can’t go
I owe my soul to the company store.”
And yes, our existing political system is tilted in favour of the status quo, but unlike the coal miner Albertans do have a choice. For some reason they’re too complacent (or afraid) to take it.
6. Jim Lees says:
I never did trust cats…..
Sent from my iPad
7. GoinFawr says:
One of my favourite T Douglas parables Susan. Have a cartoon that summarizes:
8. Pingback: | On The Soapbox: Susan Wright on why people vote against their self-interest
9. Rose MacKenzie-Kirkwood says:
I must admit I do not know enough about politics to participate in a political debate but I am smart enough to figure out that “better the enemy you know” than the “enemy you don’t” is stupid. Unfortunately people seem to go with the first. I am just going to cross my fingers and hope that people will eventually learn and decide that change is good.
|
Traffic barricades can be used to redirect or restrict traffic in areas of highway construction or repair. They are typically made from wood, steel, plastic, fiberglass, or a combination of these materials. Many manufacturers have switched to the use of recycled materials in both the supporting frame and rails of the barricades. EPA's designation covers only Types I and II traffic barricades.
EPA's Recovered Materials Advisory Notice (RMAN) recommends recycled-content levels for purchasing traffic barricades as shown in the table below.
EPA's Recommended Recovered Materials Content Levels
for Traffic Barricades (Types I and II) ¹
|Material||Postconsumer Content (%)||Total Recovered Materials Content (%)|
|Plastics (HDPE, LDPE, PET)|| || |
|Steel ¹|| || |
¹ The recommended materials content levels for steel in this table reflect the fact that the designated items can be made from steel manufactured in either a Basic Oxygen Furnace (BOF) or an Electric Arc Furnace (EAF). Steel from the BOF process contains 25-30% total recovered materials, of which 16% is postconsumer steel. Steel from the EAF process contains a total of 100% recovered steel, of which 67% is postconsumer.
Database of Manufacturers and Suppliers
This database identifies manufacturers and suppliers of traffic barricades containing recovered materials.
Buy-Recycled Series: Transportation Products (PDF) (7 pp, 89K, About PDF)
This fact sheet highlights the transportation products designated in the CPG, including traffic barricades, and includes case studies, recommended recovered-content levels, and a list of resources.
Technical Background Documents
These background documents include EPA's product research on recovered-content traffic barricades as well as a more detailed overview of the history and regulatory requirements of the CPG process.
|
University of Michigan (U-M) scientists have made an important step toward what could become the first vaccine in the U.S. to prevent urinary tract infections, if the robust immunity achieved in mice can be reproduced in humans. The findings are published September 18 in the open-access journal PLoS Pathogens.
Urinary tract infections (UTIs) affect 53 percent of women and 14 percent of men at least once in their lives. These infections lead to lost work time and 6.8 million medical provider's office visits, 1.3 million emergency room visits and 245,000 hospitalizations a year, with an annual cost of $2.4 billion in the United States.
To help combat this common health issue, the U-M scientists used a novel systematic approach, combining bioinformatics, genomics and proteomics, to look for key parts of the bacterium, Escherichia coli, that could be used in a vaccine to elicit an effective immune response. The team, led by Dr. Harry L.T. Mobley, Ph.D., screened 5,379 possible bacterial proteins and identified three strong candidates to use in a vaccine to prime the body to fight E. coli, the cause of most uncomplicated urinary tract infections. The vaccine prevented infection and produced key types of immunity when tested in mice.
Scientists have attempted to develop a vaccine for UTIs over the past two decades. This latest potential vaccine has features that may better its chances of success. It alerts the immune system to iron receptors on the surface of bacteria that perform a critical function allowing infection to spread. Administered in the nose, it induces an immune response in the body's mucosa, a first line of defense against invading pathogens. The response, also produced in mucosal tissue in the urinary tract, should help the body fight infection where it starts.
Mobley's team is currently testing more strains of E. coli obtained from women treated at U-M. Most of the strains produce the same iron-related proteins that the vaccine targets, an encouraging sign that the vaccine could work against many urinary tract infections. Mobley is seeking partners in clinical research to move the vaccine forward into a phase 1 trial in humans. If successful, this vaccine would take several more years to reach the market.
FINANCIAL DISCLOSURE: This work has been funded by Public Health Service Grant AI043363 from the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
COMPETING INTERESTS: The authors have declared that no competing interests exist.
PLEASE ADD THIS LINK TO THE PUBLISHED ARTICLE IN ONLINE VERSIONS OF YOUR REPORT: http://dx.plos.org/10.1371/journal.ppat.1000586 (link will go live upon embargo lift)
CITATION: Alteri CJ, Hagan EC, Sivick KE, Smith SN, Mobley HLT (2009) Mucosal Immunization with Iron Receptor Antigens Protects against Urinary Tract Infection. PLoS Pathog 5(9): e1000586. doi:10.1371/journal.ppat.1000586
Anne Rueter, [email protected]
Nicole Fawcett, [email protected]
This press release refers to an upcoming article in PLoS Pathogens. The release is provided by the article authors and their institution. Any opinions expressed in these releases or articles are the personal views of the journal staff and/or article contributors, and do not necessarily represent the views or policies of PLoS. PLoS expressly disclaims any and all warranties and liability in connection with the information found in the releases and articles and your use of such information.
About PLoS Pathogens
PLoS Pathogens (www.plospathogens.org) publishes outstanding original articles that significantly advance the understanding of pathogens and how they interact with their host organisms. All works published in PLoS Pathogens are open access. Everything is immediately available subject only to the condition that the original authorship and source are properly attributed. Copyright is retained by the authors. The Public Library of Science uses the Creative Commons Attribution License.
About the Public Library of Science
The Public Library of Science (PLoS) is a non-profit organization of scientists and physicians committed to making the world's scientific and medical literature a freely available public resource. For more information, visit http://www.plos.org.
AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.
|
#include "chunk.hpp"
#include "envchunk.hpp"
//macro for helping to force inclusion of chunks when using libraries
FORCE_CHUNK_INCLUDE_IMPLEMENT(envchunk)
// Class Environment_Data_Chunk functions
RIF_IMPLEMENT_DYNCREATE("REBENVDT",Environment_Data_Chunk)
// constructor from buffer
LOCKABLE_CHUNK_WITH_CHILDREN_LOADER("REBENVDT",Environment_Data_Chunk)
/*
Children for Environment_Data_Chunk :
"ENVSDSCL" Environment_Scale_Chunk
"GAMEMODE" Environment_Game_Mode_Chunk
"ENVPALET" Environment_Palette_Chunk
"ENVTXLIT" Environment_TLT_Chunk
"TLTCONFG" TLT_Config_Chunk
"CLRLOOKP" Coloured_Polygons_Lookup_Chunk
"MATCHIMG" Matching_Images_Chunk
"BMPNAMES" Global_BMP_Name_Chunk
"BMNAMVER" BMP_Names_Version_Chunk
"BMNAMEXT" BMP_Names_ExtraData_Chunk
"RIFFNAME" RIF_Name_Chunk
"ENDTHEAD" Environment_Data_Header_Chunk
"LIGHTSET" Light_Set_Chunk
"PRSETPAL" Preset_Palette_Chunk
"SPECLOBJ" Special_Objects_Chunk
"AVPEXSTR" AVP_External_Strategy_Chunk
"BMPMD5ID" Bitmap_MD5_Chunk
"GLOGENDC" Global_Generator_Data_Chunk
"FRAGTYPE" Fragment_Type_Chunk
"ENVACOUS" Environment_Acoustics_Chunk
"AVPENVIR" AVP_Environment_Settings_Chunk
"SOUNDDIR" Sound_Directory_Chunk
"RANTEXID" Random_Texture_ID_Chunk
*/
// empty constructor
Environment_Data_Chunk::Environment_Data_Chunk (Chunk_With_Children * parent)
:Lockable_Chunk_With_Children (parent, "REBENVDT")
{
// as necessary, generated automatically
new Environment_Data_Header_Chunk (this);
}
BOOL Environment_Data_Chunk::file_equals (HANDLE & /*rif_file*/)
{
return(TRUE);
}
Environment_Data_Header_Chunk * Environment_Data_Chunk::get_header()
{
return (Environment_Data_Header_Chunk *) this->lookup_single_child ("ENDTHEAD");
}
const char * Environment_Data_Chunk::get_head_id()
{
Environment_Data_Header_Chunk * hdptr = get_header();
if (!hdptr) return (0);
return(hdptr->identifier);
}
void Environment_Data_Chunk::set_lock_user (char * user)
{
Environment_Data_Header_Chunk * hdptr = get_header();
if (!hdptr) return;
strncpy (hdptr->lock_user, user,16);
hdptr->lock_user[16] = 0;
}
void Environment_Data_Chunk::post_input_processing()
{
if (get_header())
if (get_header()->flags & GENERAL_FLAG_LOCKED)
external_lock = TRUE;
Chunk_With_Children::post_input_processing();
}
///////////////////////////////////////
// Class Environment_Data_Header_Chunk functions
RIF_IMPLEMENT_DYNCREATE("ENDTHEAD",Environment_Data_Header_Chunk)
// from buffer
Environment_Data_Header_Chunk::Environment_Data_Header_Chunk (Chunk_With_Children * parent, const char * hdata, size_t /*hsize*/)
: Chunk (parent, "ENDTHEAD"),
flags (0), version_no (0)
{
flags = *((int *) hdata);
strncpy (lock_user, (hdata + 4), 16);
lock_user[16] = '\0';
version_no = *((int *) (hdata + 20));
}
BOOL Environment_Data_Header_Chunk::output_chunk (HANDLE & hand)
{
unsigned long junk;
BOOL ok;
char * data_block;
data_block = make_data_block_from_chunk();
ok = WriteFile (hand, (long *) data_block, (unsigned long) chunk_size, &junk, 0);
delete [] data_block;
if (!ok) return FALSE;
return TRUE;
}
void Environment_Data_Header_Chunk::fill_data_block ( char * data_start)
{
strncpy (data_start, identifier, 8);
data_start += 8;
*((int *) data_start) = chunk_size;
data_start += 4;
*((int *) data_start) = flags;
strncpy ((data_start + 4), lock_user, 16);
*((int *) (data_start+20)) = version_no;
}
void Environment_Data_Header_Chunk::prepare_for_output()
{
version_no ++;
}
///////////////////////////////////////
// Class Environment_Scale_Chunk functions
RIF_IMPLEMENT_DYNCREATE("ENVSDSCL",Environment_Scale_Chunk)
void Environment_Scale_Chunk::fill_data_block ( char * data_start)
{
strncpy (data_start, identifier, 8);
data_start += 8;
*((int *) data_start) = chunk_size;
data_start += 4;
*((double *) data_start) = scale;
}
///////////////////////////////////////
RIF_IMPLEMENT_DYNCREATE("ENVACOUS",Environment_Acoustics_Chunk)
Environment_Acoustics_Chunk::Environment_Acoustics_Chunk(Environment_Data_Chunk* parent)
:Chunk(parent,"ENVACOUS")
{
env_index=0;
reverb=2;
spare=0;
}
Environment_Acoustics_Chunk::Environment_Acoustics_Chunk(Chunk_With_Children* parent,const char* data,size_t)
:Chunk(parent,"ENVACOUS")
{
env_index=*(int*)data;
data+=4;
reverb=*(float*)data;
data+=4;
spare=*(int*)data;
}
void Environment_Acoustics_Chunk::fill_data_block(char* data_start)
{
strncpy (data_start, identifier, 8);
data_start += 8;
*((int *) data_start) = chunk_size;
data_start += 4;
*(int*)data_start=env_index;
data_start+=4;
*(float*)data_start=reverb;
data_start+=4;
*(int*)data_start=spare;
}
///////////////////////////////////////
RIF_IMPLEMENT_DYNCREATE("SOUNDDIR",Sound_Directory_Chunk)
Sound_Directory_Chunk::Sound_Directory_Chunk(Environment_Data_Chunk* parent,const char* dir)
:Chunk(parent,"SOUNDDIR")
{
directory=new char[strlen(dir)+1];
strcpy(directory,dir);
}
Sound_Directory_Chunk::Sound_Directory_Chunk(Chunk_With_Children * const parent, const char* data, size_t const )
:Chunk(parent,"SOUNDDIR")
{
directory=new char[strlen(data)+1];
strcpy(directory,data);
}
Sound_Directory_Chunk::~Sound_Directory_Chunk()
{
delete [] directory;
}
void Sound_Directory_Chunk::fill_data_block(char* data_start)
{
strncpy (data_start, identifier, 8);
data_start += 8;
*((int *) data_start) = chunk_size;
data_start += 4;
strcpy(data_start,directory);
}
///////////////////////////////////////
/////////////////////Available shape set collections////////////////////////////////////////
RIF_IMPLEMENT_DYNCREATE("RANTEXID",Random_Texture_ID_Chunk)
Random_Texture_ID_Chunk::Random_Texture_ID_Chunk(Chunk_With_Children* parent,const char* _name)
:Chunk(parent,"RANTEXID")
{
name=new char[strlen(_name)+1];
strcpy(name,_name);
spare1=spare2=0;
}
Random_Texture_ID_Chunk::Random_Texture_ID_Chunk(Chunk_With_Children* parent,const char* data,size_t)
:Chunk(parent,"RANTEXID")
{
CHUNK_EXTRACT_STRING(name);
int num_types,type;
CHUNK_EXTRACT(num_types,int);
for(int i=0;i<num_types;i++)
{
CHUNK_EXTRACT(type,int);
random_types.add_entry(type);
}
CHUNK_EXTRACT(spare1,int);
CHUNK_EXTRACT(spare2,int);
}
Random_Texture_ID_Chunk::~Random_Texture_ID_Chunk()
{
delete [] name;
}
void Random_Texture_ID_Chunk::fill_data_block(char* data)
{
CHUNK_FILL_START
CHUNK_FILL_STRING(name)
CHUNK_FILL(random_types.size(),int);
for(LIF<int> rlif(&random_types);!rlif.done();rlif.next())
{
CHUNK_FILL(rlif(),int)
}
CHUNK_FILL(spare1,int)
CHUNK_FILL(spare2,int)
}
size_t Random_Texture_ID_Chunk::size_chunk()
{
// 12-byte chunk header, plus 12 bytes for the type count and two spares,
// plus 4 bytes per random type entry
chunk_size=12+12+random_types.size()*4;
// name string plus terminator, padded up to a 4-byte boundary
chunk_size+=(strlen(name)+4)&~3;
return chunk_size;
}
///////////////////////////////////////
//Class Special_Objects_Chunk :
RIF_IMPLEMENT_DYNCREATE("SPECLOBJ",Special_Objects_Chunk)
CHUNK_WITH_CHILDREN_LOADER("SPECLOBJ",Special_Objects_Chunk)
/*
Children for Special_Objects_Chunk :
"AVPGENER" AVP_Generator_Chunk
"SOUNDOB2" Sound_Object_Chunk
"VIOBJECT" Virtual_Object_Chunk
"AVPCABLE" AVP_Power_Cable_Chunk
"AVPSTART" AVP_Player_Start_Chunk
"AVPPATH2" AVP_Path_Chunk
"AVPGENEX" AVP_Generator_Extra_Data_Chunk
"SOUNDEXD" Sound_Object_Extra_Data_Chunk
"PARGENER" AVP_Particle_Generator_Chunk
"PLACHIER" Placed_Hierarchy_Chunk
"CAMORIGN" Camera_Origin_Chunk
"AVPDECAL" AVP_Decal_Chunk
"R6WAYPNT" R6_Waypoint_Chunk
*/
|
#include "swift/Obfuscation/LayoutRenamer.h"
namespace swift {
namespace obfuscation {
LayoutNodeRenaming::LayoutNodeRenaming(xmlNode* Node,
const xmlChar* PropertyName,
const std::string ObfuscatedName)
: Node(Node),
PropertyName(PropertyName),
ObfuscatedName(ObfuscatedName) {};
BaseLayoutRenamingStrategy::BaseLayoutRenamingStrategy(xmlNode *RootNode)
: RootNode(RootNode) {}
xmlNode*
BaseLayoutRenamingStrategy::findNodeWithAttributeValue(
xmlNode *Node,
const xmlChar *AttributeName,
const xmlChar *AttributeValue,
const TraversalDirection TraversalDirection) {
if(Node == nullptr || AttributeName == nullptr) {
return nullptr;
}
for (xmlNode *CurrentNode = Node;
CurrentNode != nullptr;
CurrentNode = CurrentNode->next) {
if (CurrentNode->type == XML_ELEMENT_NODE) {
for (xmlAttr *CurrentAttribute = CurrentNode->properties;
CurrentAttribute != nullptr;
CurrentAttribute = CurrentAttribute->next) {
if(CurrentAttribute->type == XML_ATTRIBUTE_NODE) {
if(xmlStrcmp(CurrentAttribute->name, AttributeName) == 0){
// if AttributeValue == nullptr then it means that we're interested
// in finding only a Node with attribute that name is AttributeName
if(AttributeValue == nullptr) {
return CurrentNode;
} else {
// otherwise we need to pull attribute's value and compare it
// with AttributeValue that was passed as a parameter
xmlChar* value = xmlGetProp(CurrentNode, AttributeName);
if(xmlStrcmp(value, AttributeValue) == 0){
return CurrentNode;
}
}
}
}
}
}
// depending on the TraversalDirection we go down (children)
// or up (parent) of the document
xmlNode *NextNode = nullptr;
if(TraversalDirection == Up) {
NextNode = CurrentNode->parent;
} else if(TraversalDirection == Down) {
NextNode = CurrentNode->children;
}
if(NextNode != nullptr) {
xmlNode *Found = findNodeWithAttributeValue(NextNode,
AttributeName,
AttributeValue,
TraversalDirection);
if(Found != nullptr) {
return Found;
}
}
}
return nullptr;
}
class XCode9RenamingStrategy: public BaseLayoutRenamingStrategy {
private:
// Needed for type renaming
const xmlChar *
CustomClassAttributeName = reinterpret_cast<const xmlChar *>("customClass");
const xmlChar *
CustomModuleAttributeName = reinterpret_cast<const xmlChar *>("customModule");
// Needed for outlet renaming
const xmlChar *
OutletNodeName = reinterpret_cast<const xmlChar *>("outlet");
const xmlChar *
OutletPropertyAttributeName = reinterpret_cast<const xmlChar *>("property");
// Needed for action renaming
const xmlChar *
ActionNodeName = reinterpret_cast<const xmlChar *>("action");
const xmlChar *
ActionSelectorAttributeName = reinterpret_cast<const xmlChar *>("selector");
//General
const xmlChar *
IdAttributeName = reinterpret_cast<const xmlChar *>("id");
const xmlChar *
DestinationAttributeName = reinterpret_cast<const xmlChar *>("destination");
const xmlChar *
TargetAttributeName = reinterpret_cast<const xmlChar *>("target");
TargetRuntime TargetRuntime;
bool shouldRename(const SymbolRenaming &Symbol,
const std::string &CustomClass,
const std::string &CustomModule) {
return CustomModule.empty() || CustomModule == Symbol.Module;
}
void extractCustomClassAndModule(xmlNode *Node,
std::string &CustomClass,
std::string &CustomModule) {
for (xmlAttr *CurrentAttribute = Node->properties;
CurrentAttribute != nullptr;
CurrentAttribute = CurrentAttribute->next) {
if(CurrentAttribute->type == XML_ATTRIBUTE_NODE) {
if (xmlStrcmp(CurrentAttribute->name, CustomClassAttributeName) == 0) {
CustomClass = std::string(reinterpret_cast<const char *>(xmlGetProp(
Node,
CurrentAttribute->name)));
}
if (xmlStrcmp(CurrentAttribute->name, CustomModuleAttributeName) == 0) {
CustomModule = std::string(reinterpret_cast<const char *>(xmlGetProp(
Node,
CurrentAttribute->name)));
}
}
}
}
// Determines if a SymbolIdentifier contains the given ClassName and
// ModuleName. It is used to find the proper SymbolRenaming for outlets and actions.
bool identifierContainsModuleAndClass(const std::string SymbolIdentifier,
const std::string ClassName,
const std::string ModuleName) {
auto HasClassName =
SymbolIdentifier.find("."+ClassName+".") != std::string::npos;
auto HasModuleName = ModuleName.empty() ||
(SymbolIdentifier.find("."+ModuleName+".") != std::string::npos);
return HasClassName && HasModuleName;
}
public:
XCode9RenamingStrategy(xmlNode *RootNode, enum TargetRuntime TargetRuntime)
: BaseLayoutRenamingStrategy(RootNode) {
this->TargetRuntime = TargetRuntime;
}
// Extracts Node information that is required to perform renaming.
// Layout files are XML, so it looks for specific attributes
// such as "customClass" and retrieves their values.
// These values are then used to look up the RenamedSymbols map.
// If a "customClass" value is present inside RenamedSymbols, then
// it means that this symbol was renamed in the source code in a previous step
// and should be renamed in the layout file as well, so it is collected in the
// NodesToRename vector.
// The "customModule" attribute is also taken into account: if it's present,
// its value is compared with the
// symbol's module value (the one found in RenamedSymbols), and
// if it's not present, we assume that it's inherited from the target project.
void extractLayoutRenamingNodes(
xmlNode *Node,
const std::vector<SymbolRenaming> &RenamedSymbols,
std::vector<LayoutNodeRenaming> &NodesToRename) {
if(Node == nullptr){
return;
}
for (xmlNode *CurrentNode = Node;
CurrentNode != nullptr;
CurrentNode = CurrentNode->next) {
if (CurrentNode->type == XML_ELEMENT_NODE) {
auto CustomClassNode = extractCustomClassRenamingNode(
CurrentNode,
RenamedSymbols);
if (CustomClassNode.hasValue()) {
NodesToRename.push_back(CustomClassNode.getValue());
}
auto ActionNode = extractActionRenamingNode(CurrentNode,
RenamedSymbols);
if (ActionNode.hasValue()) {
NodesToRename.push_back(ActionNode.getValue());
}
auto OutletNode = extractOutletRenamingNode(CurrentNode,
RenamedSymbols);
if (OutletNode.hasValue()) {
NodesToRename.push_back(OutletNode.getValue());
}
}
xmlNode *ChildrenNode = CurrentNode->children;
if(ChildrenNode != nullptr) {
extractLayoutRenamingNodes(ChildrenNode, RenamedSymbols, NodesToRename);
}
}
}
llvm::Optional<LayoutNodeRenaming> extractCustomClassRenamingNode(
xmlNode *Node,
const std::vector<SymbolRenaming> &RenamedSymbols) {
std::string CustomClass;
std::string CustomModule;
extractCustomClassAndModule(Node, CustomClass, CustomModule);
if(!CustomClass.empty()) {
// Find SymbolRenaming for given CustomClass and perform renaming
for(auto SymbolRenaming: RenamedSymbols) {
if(SymbolRenaming.OriginalName == CustomClass) {
if(shouldRename(SymbolRenaming, CustomClass, CustomModule)) {
return LayoutNodeRenaming(Node,
CustomClassAttributeName,
SymbolRenaming.ObfuscatedName);
}
}
}
}
return llvm::None;
}
// actions look like this in xml for macos:
// <action selector="customAction:" target="XfG-lQ-9wD" id="UKD-iL-45N"/>
//
// and like this in xml for ios:
// <action selector="customAction:" destination="0Ct-JR-NLr" eventType="touchUpInside" id="s2s-A5-aG6"/>
//
// in order to obfuscate customAction the module name needs to be known.
// It looks for a node whose id attribute's value is equal to the
// action node's destination (or target, if destination
// is not present) attribute value,
// then it extracts the CustomClass/CustomModule needed to check
// whether customAction should be obfuscated.
// It does the check and returns the node info that will later be renamed.
llvm::Optional<LayoutNodeRenaming> extractActionRenamingNode(
xmlNode *Node,
const std::vector<SymbolRenaming> &RenamedSymbols) {
if (xmlStrcmp(Node->name, ActionNodeName) == 0) {
std::string DestinationOrTarget;
if(TargetRuntime == CocoaTouch) {
DestinationOrTarget = std::string(
reinterpret_cast<const char *>(xmlGetProp(Node,
DestinationAttributeName)));
} else if(TargetRuntime == Cocoa) {
DestinationOrTarget = std::string(reinterpret_cast<const char *>(
xmlGetProp(Node,
TargetAttributeName)));
}
// find node with which id attribute value == DestinationOrTarget
xmlNode *NodeWithDestinationAsId = findNodeWithAttributeValue(
RootNode,
IdAttributeName,
reinterpret_cast<const xmlChar *>
(DestinationOrTarget.c_str()),
TraversalDirection::Down);
if(NodeWithDestinationAsId != nullptr) {
std::string CustomClass;
std::string CustomModule;
// Try to extract CustomClass and CustomModule
extractCustomClassAndModule(
NodeWithDestinationAsId,
CustomClass,
CustomModule);
// Check if should rename and if yes then perform actual renaming
if(!CustomClass.empty()) {
std::string SelectorName = std::string(
reinterpret_cast<const char *>(
xmlGetProp(
Node,
ActionSelectorAttributeName)));
std::vector<std::string> SplittedSelName = split(SelectorName, ':');
if(!SplittedSelName.empty()) {
std::string SelectorFunctionName = SplittedSelName[0];
for(auto SymbolRenaming: RenamedSymbols) {
if(SymbolRenaming.OriginalName == SelectorFunctionName &&
identifierContainsModuleAndClass(SymbolRenaming.Identifier,
CustomClass,
CustomModule)) {
SelectorName.replace(0,
SymbolRenaming.OriginalName.length(),
SymbolRenaming.ObfuscatedName);
if(shouldRename(SymbolRenaming, CustomClass, CustomModule)) {
return LayoutNodeRenaming(
Node,
ActionSelectorAttributeName,
SelectorName);
}
}
}
}
}
}
}
return llvm::None;
}
// outlets look like this in xml:
// <outlet property="prop_name" destination="x0y-zc-UQE" id="IiG-Jc-DUb"/>
//
// in order to obfuscate prop_name the module name needs to be known,
// so it looks for the closest parent which has a CustomClass attribute.
// Once it has the CustomClass/CustomModule needed to check
// whether prop_name should be obfuscated,
// it does the check and returns the node info that will later be renamed.
llvm::Optional<LayoutNodeRenaming> extractOutletRenamingNode(
xmlNode *Node,
const std::vector<SymbolRenaming> &RenamedSymbols) {
if (xmlStrcmp(Node->name, OutletNodeName) == 0) {
std::string CustomClass;
std::string CustomModule;
// Search for closest parent Node with custom class
xmlNode *Parent = findNodeWithAttributeValue(
Node,
CustomClassAttributeName,
nullptr,
TraversalDirection::Up);
if(Parent != nullptr) {
// Try to extract CustomClass and CustomModule
extractCustomClassAndModule(Parent, CustomClass, CustomModule);
// Check if should rename and if yes then perform actual renaming
if(!CustomClass.empty()) {
std::string PropertyName = std::string(
reinterpret_cast<const char *>(
xmlGetProp(
Node,
OutletPropertyAttributeName)));
for(auto SymbolRenaming: RenamedSymbols) {
if(SymbolRenaming.OriginalName == PropertyName &&
identifierContainsModuleAndClass(SymbolRenaming.Identifier,
CustomClass,
CustomModule)) {
if(shouldRename(SymbolRenaming, CustomClass, CustomModule)) {
return LayoutNodeRenaming(
Node,
OutletPropertyAttributeName,
SymbolRenaming.ObfuscatedName);
}
}
}
}
}
}
return llvm::None;
}
};
LayoutRenamer::LayoutRenamer(std::string FileName) {
this->FileName = FileName;
XmlDocument = xmlReadFile(FileName.c_str(),
/* encoding */ "UTF-8",
/* options */ 0);
}
LayoutRenamer::~LayoutRenamer() {
if(XmlDocument != nullptr) {
xmlFreeDoc(XmlDocument);
}
xmlCleanupParser();
}
// For now we support layout files with root node that looks like this:
// <document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0" ... >
// where type can be Cocoa or CocoaTouch .XIB or .Storyboard.XIB
llvm::Expected<std::unique_ptr<BaseLayoutRenamingStrategy>>
LayoutRenamer::createRenamingStrategy(xmlNode *RootNode) {
const auto *RootNodeName = reinterpret_cast<const xmlChar *>("document");
if (xmlStrcmp(RootNode->name, RootNodeName) == 0) {
const auto *
RootNodeTypeAttributeName = reinterpret_cast<const xmlChar *>("type");
const auto *
RootNodeVersionAttributeName = reinterpret_cast<const xmlChar *>("version");
const auto *
TargetRuntimeAttributeName = reinterpret_cast<const xmlChar *>
("targetRuntime");
auto TypeAttributeValue = std::string(
reinterpret_cast<const char *>(xmlGetProp(
RootNode,
RootNodeTypeAttributeName)));
auto VersionAttributeValue = std::string(
reinterpret_cast<const char *>(xmlGetProp(
RootNode,
RootNodeVersionAttributeName)));
std::set<std::string> SupportedDocumentTypes = {
"com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB",
"com.apple.InterfaceBuilder3.Cocoa.Storyboard.XIB",
"com.apple.InterfaceBuilder3.CocoaTouch.XIB",
"com.apple.InterfaceBuilder3.Cocoa.XIB"
};
if(SupportedDocumentTypes.find(TypeAttributeValue) !=
SupportedDocumentTypes.end()
&& VersionAttributeValue == "3.0") {
// try to find out what the target platform is
xmlChar* TargetRuntimeValue = xmlGetProp(RootNode,
TargetRuntimeAttributeName);
TargetRuntime TargetRuntime = Undefined;
if(TargetRuntimeValue != nullptr) {
std::string TargetRuntimeValueStr = std::string(
reinterpret_cast<const char *>(
TargetRuntimeValue));
if (TargetRuntimeValueStr.find("CocoaTouch") != std::string::npos) {
TargetRuntime = CocoaTouch;
} else if (TargetRuntimeValueStr.find("Cocoa") != std::string::npos) {
TargetRuntime = Cocoa;
}
}
if(TargetRuntime == Undefined) {
return stringError("Could not parse target runtime in: " + FileName);
}
return llvm::make_unique<XCode9RenamingStrategy>(RootNode, TargetRuntime);
} else {
// Probably a new version of layout file came out
// and it should be handled separately.
// Create a new BaseLayoutRenamingStrategy implementation
// and update this method so it returns correct Strategy for specific
// version of the layout file.
return stringError("Unknown layout file version for layout: " + FileName);
}
} else {
return stringError("Unknown root node type in layout file: " + FileName);
}
}
llvm::Expected<std::vector<LayoutNodeRenaming>>
LayoutRenamer::extractLayoutRenamingNodes(
std::vector<SymbolRenaming> RenamedSymbols) {
std::vector<LayoutNodeRenaming> NodesToRename;
if (XmlDocument == nullptr) {
return stringError("Could not parse file: " + FileName);
}
xmlNode *RootNode = xmlDocGetRootElement(XmlDocument);
auto RenamingStrategyOrError = createRenamingStrategy(RootNode);
if (auto Error = RenamingStrategyOrError.takeError()) {
return std::move(Error);
}
auto RenamingStrategy = std::move(RenamingStrategyOrError.get());
RenamingStrategy->extractLayoutRenamingNodes(RootNode,
RenamedSymbols,
NodesToRename);
return NodesToRename;
}
void
LayoutRenamer::performRenaming(
const std::vector<LayoutNodeRenaming> LayoutNodesToRename,
std::string OutputPath) {
for (const auto &LayoutNodeToRename: LayoutNodesToRename) {
xmlSetProp(LayoutNodeToRename.Node,
LayoutNodeToRename.PropertyName,
reinterpret_cast<const xmlChar *>(
LayoutNodeToRename.ObfuscatedName.c_str()));
}
xmlSaveFileEnc(static_cast<const char *>(OutputPath.c_str()),
XmlDocument,
reinterpret_cast<const char *>(XmlDocument->encoding));
}
} //namespace obfuscation
} //namespace swift
|
#include <algorithm>
#include <iostream>
#include <math.h>
#include <thread>
#include <chrono>
#include <iterator>
#include <string>
#include <stdlib.h>
#include <stdio.h>
#include <vector>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <cerrno>
#include <cstring>
#include <dirent.h>
#include <time.h>
#include <unistd.h>
#include "constants.h"
/* include filesystem header for new directory iterator
https://stackoverflow.com/questions/45867379/why-does-gcc-not-seem-to-have-the-filesystem-standard-library
To link with the library you need to add -lstdc++fs to the command line.*/
#include <experimental/filesystem>
using namespace std;
class ProcessParser{
private:
std::ifstream stream;
public:
static string getCmd(string pid);
static vector<string> getPidList();
static std::string getVmSize(string pid);
static std::string getCpuPercent(string pid);
static long int getSysUpTime();
static std::string getProcUpTime(string pid);
static string getProcUser(string pid);
static vector<string> getSysCpuPercent(string coreNumber = "");
static float getSysRamPercent();
static string getSysKernelVersion();
static int getTotalThreads();
static int getTotalNumberOfProcesses();
static int getNumberOfRunningProcesses();
static string getOSName();
static std::string PrintCpuStats(std::vector<std::string> values1, std::vector<std::string>values2);
static bool isPidExisting(string pid);
static int getNumberOfCores();
static float get_sys_active_cpu_time(vector<string> values);
static float get_sys_idle_cpu_time(vector<string>values);
};
// TODO: Define all of the above functions below:
bool ProcessParser::isPidExisting(string pid){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + pid, fstream);
} catch (std::string &exp) {
// getStream throws when /proc/<pid> cannot be opened,
// which means no process with this pid exists
return false;
}
return true;
}
string ProcessParser::getCmd(string pid){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + pid + Path::cmdPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string cmd;
std::getline(fstream,cmd);
return cmd;
}
// RE-implement this function using the filesystem lib
vector<string> ProcessParser::getPidList()
{
DIR* dir;
// Basically, we are scanning the /proc dir for all directories with numbers as their names
// Each name that passes the check is stored in the vector as one of the machine's pids
vector<string> container;
if(!(dir = opendir("/proc")))
throw std::runtime_error(std::strerror(errno));
while (dirent* dirp = readdir(dir)) {
// is this a directory?
if(dirp->d_type != DT_DIR)
continue;
// Is every character of the name a digit?
if (all_of(dirp->d_name, dirp->d_name + std::strlen(dirp->d_name), [](char c){ return std::isdigit(c); })) {
container.push_back(dirp->d_name);
}
}
// Validate that the directory handle closed cleanly
if(closedir(dir))
throw std::runtime_error(std::strerror(errno));
return container;
}
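A sketch of the re-implementation the TODO above asks for, using the filesystem library whose header is already included at the top of this file. It is shown against C++17 `std::filesystem`; with `<experimental/filesystem>` the iterator API is the same, swap the namespace and link with -lstdc++fs. `getPidListFs` is a hypothetical free-function name used here so it does not clash with the member function:

```cpp
#include <algorithm>
#include <cctype>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem; // or std::experimental::filesystem with -lstdc++fs

// Collect the names of all all-digit directories under /proc:
// each one corresponds to a running process.
std::vector<std::string> getPidListFs() {
    std::vector<std::string> container;
    for (const auto &entry : fs::directory_iterator("/proc")) {
        if (!fs::is_directory(entry.status()))
            continue;
        const std::string name = entry.path().filename().string();
        if (!name.empty() &&
            std::all_of(name.begin(), name.end(),
                        [](unsigned char c) { return std::isdigit(c) != 0; }))
            container.push_back(name);
    }
    return container;
}
```

Unlike the dirent version, the iterator reports failure by throwing `std::filesystem::filesystem_error`, so the manual `opendir`/`closedir` error checks disappear.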
std::string ProcessParser::getVmSize(string pid){
std::ifstream fstream;
float result;
string line;
string inputString = "VmData:";
try{
Util::getStream(Path::basePath() + pid + Path::statusPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (getline(fstream,line))
{
if(line.compare(0,inputString.size(),inputString) == 0){
istringstream str(line);
istream_iterator<string> beg(str),end;
vector<string> str_val (beg,end);
// convert from kB to MB
result = (stof(str_val[1])/float(1024));
break;
}
}
return to_string(result);
}
std::string ProcessParser::getCpuPercent(string pid){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + pid + "/" + Path::statPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string line;
float result;
getline(fstream,line);
istringstream str(line);
istream_iterator<string> beg(str),end;
vector<string> values(beg,end);
float frequency = sysconf(_SC_CLK_TCK);
// getProcUpTime returns utime already divided by the clock frequency,
// so scale it back to clock ticks before summing with the other fields
float utime = stof(ProcessParser::getProcUpTime(pid)) * frequency;
float uptime = ProcessParser::getSysUpTime();
float stime = stof(values[14]);
float cutime = stof(values[15]);
float cstime = stof(values[16]);
float starttime = stof(values[21]);
float total_time = utime + stime + cutime + cstime;
float seconds = uptime - (starttime/frequency);
result = 100.0 * ((total_time/frequency)/seconds);
return to_string(result);
}
long int ProcessParser::getSysUpTime(){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + Path::upTimePath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string upTime;
std::getline(fstream,upTime);
istringstream str(upTime);
istream_iterator<string> beg(str),end;
vector<string> values(beg,end);
return stol(values[0]);
}
std::string ProcessParser::getProcUpTime(string pid){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + pid + "/" + Path::statPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string line;
float result;
getline(fstream,line);
istringstream str(line);
istream_iterator<string> beg(str),end;
vector<string> values(beg,end);
result = stof(values[13])/sysconf(_SC_CLK_TCK);
return to_string(result);
}
string ProcessParser::getProcUser(string pid){
string line;
string name = "Uid:";
string result;
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + pid + Path::statusPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
// reading Uid line from status file
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(),name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
result = values[1];
break;
}
}
// Finding the user equivalent to the Uid in the /etc/passwd file
std::ifstream userStream;
try{
Util::getStream("/etc/passwd", userStream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
// Searching for name of the user with selected UID
name =("x:" + result);
while (std::getline(userStream, line)) {
if (line.find(name) != std::string::npos) {
result = line.substr(0, line.find(":"));
return result;
}
}
return "";
}
vector<string> ProcessParser::getSysCpuPercent(string coreNumber){
string line;
string name = "cpu" + coreNumber;
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + Path::statPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(),name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
// set of cpu data active and idle times;
return values;
}
}
return (vector<string>());
}
float ProcessParser::getSysRamPercent(){
string line;
string name0 = "MemTotal:";
string name1 = "MemFree:";
string name2 = "MemAvailable:";
string name3 = "Buffers:";
float totalMem = 0.0;
float memFree = 0.0;
float memAval = 0.0;
float buffers = 0.0;
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + Path::memInfoPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line))
{
if(line.compare(0, name0.size(),name0) == 0){
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
totalMem = stof(values[1]);
}
if(line.compare(0, name1.size(),name1) == 0){
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
memFree = stof(values[1]);
}
if(line.compare(0, name2.size(),name2) == 0){
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
memAval = stof(values[1]);
}
if(line.compare(0, name3.size(),name3) == 0){
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
buffers = stof(values[1]);
}
}
// used memory as a share of total memory, with buffers excluded from the total
return float(100.0 * (1 - (memFree/(totalMem-buffers))));
}
string ProcessParser::getSysKernelVersion(){
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + Path::versionPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string version = "Linux";
string line;
while(std::getline(fstream,line)){
if(line.compare(0, version.size(), version) == 0){
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
return values[2];
}
}
return "";
}
int ProcessParser::getTotalThreads(){
string line;
int result = 0;
string name = "Threads:";
vector<string> _list = ProcessParser::getPidList();
for (int i=0 ; i<_list.size();i++) {
string pid = _list[i];
//getting every process and reading their number of their threads
std::ifstream fstream ;
try{
Util::getStream(Path::basePath() + pid + Path::statusPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(), name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
result += stoi(values[1]);
break;
}
}
}
return result;
}
int ProcessParser::getTotalNumberOfProcesses(){
string line;
int result = 0;
string name = "processes";
ifstream fstream ;
try{
Util::getStream(Path::basePath() + Path::statPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(), name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
result += stoi(values[1]);
break;
}
}
return result;
}
int ProcessParser::getNumberOfRunningProcesses(){
string line;
int result = 0;
string name = "procs_running";
ifstream fstream ;
try{
Util::getStream(Path::basePath() + Path::statPath(), fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(), name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
result += stoi(values[1]);
break;
}
}
return result;
}
string ProcessParser::getOSName(){
std::ifstream fstream;
try{
// Reading the OS name from /etc/os-release rather than Path::osNamePath()
Util::getStream("/etc/os-release", fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
string name = "PRETTY_NAME=";
string line;
while(std::getline(fstream,line)){
if(line.compare(0, name.size(), name) == 0){
std::size_t found = line.find("=");
found++;
string result = line.substr(found);
result.erase(std::remove(result.begin(), result.end(), '"'), result.end());
return result;
}
}
return "";
}
std::string ProcessParser::PrintCpuStats(std::vector<std::string> values1, std::vector<std::string>values2){
float activeTime = ProcessParser::get_sys_active_cpu_time(values2) - ProcessParser::get_sys_active_cpu_time(values1);
float idleTime = ProcessParser::get_sys_idle_cpu_time(values2) - ProcessParser::get_sys_idle_cpu_time(values1);
float totalTime = activeTime + idleTime;
float result = 100.0 * (activeTime/totalTime);
std::string time = to_string(result);
return time;
}
int ProcessParser::getNumberOfCores(){
string line;
string name = "cpu cores";
std::ifstream fstream;
try{
Util::getStream(Path::basePath() + "cpuinfo", fstream);
} catch (std::string &exp) {
std::cout << exp << std::endl;
}
while (std::getline(fstream, line)) {
if (line.compare(0, name.size(),name) == 0) {
istringstream buf(line);
istream_iterator<string> beg(buf), end;
vector<string> values(beg, end);
return stoi(values[3]);
}
}
return 0;
}
/* These functions for calculating active and idle time are a direct extension of the system CPU percentage.
They sort and categorize a newly created string vector,
which contains parsed raw data from file. Because most of the data is recorded as time,
we are selecting and summing all active and idle time. */
float ProcessParser::get_sys_active_cpu_time(vector<string> values)
{
return (stof(values[S_USER]) +
stof(values[S_NICE]) +
stof(values[S_SYSTEM]) +
stof(values[S_IRQ]) +
stof(values[S_SOFTIRQ]) +
stof(values[S_STEAL]) +
stof(values[S_GUEST]) +
stof(values[S_GUEST_NICE]));
}
float ProcessParser::get_sys_idle_cpu_time(vector<string>values)
{
return (stof(values[S_IDLE]) + stof(values[S_IOWAIT]));
}
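As a worked illustration of the sorting-and-summing described in the comment above, here is a self-contained sketch with invented snapshot values. The numeric indices mirror the /proc/stat field order that the S_* constants from constants.h are assumed to encode (cpu, user, nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice); the `example*` names are hypothetical, standing in for the member functions:

```cpp
#include <string>
#include <vector>

// user + nice + system + irq + softirq + steal + guest + guest_nice
inline float exampleActiveTime(const std::vector<std::string> &v) {
    return std::stof(v[1]) + std::stof(v[2]) + std::stof(v[3]) +
           std::stof(v[6]) + std::stof(v[7]) + std::stof(v[8]) +
           std::stof(v[9]) + std::stof(v[10]);
}

// idle + iowait
inline float exampleIdleTime(const std::vector<std::string> &v) {
    return std::stof(v[4]) + std::stof(v[5]);
}

// percentage of the interval the CPU spent active between two snapshots
inline float exampleCpuPercent(const std::vector<std::string> &before,
                               const std::vector<std::string> &after) {
    float active = exampleActiveTime(after) - exampleActiveTime(before);
    float idle   = exampleIdleTime(after) - exampleIdleTime(before);
    return 100.0f * active / (active + idle);
}
```

For example, snapshots with active tick totals 150 and 230 and idle totals 850 and 920 give a delta of 80 active against 70 idle ticks, i.e. 100 * 80 / (80 + 70), roughly 53.3 percent active over the interval.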
|
The confidence factor
Cal Thomas
Posted: Nov 15, 2002 12:00 AM
There is a difference between cockiness and confidence. The one is a character flaw in prideful men, and pride, as the Proverb warns, "goes before destruction, a haughty spirit before a fall" (Proverbs 16:18). The other is an essential ingredient in a leader who not only believes in himself and the worth of his ideas but also that the people he leads will follow him if they know where he is going and why he wants to take us there.
President Bush has this new confidence, which increasingly resembles the bold optimism of Ronald Reagan. It has been enhanced by the election results, but it was forged in the adversity of the post-9/11 world, which is entirely different from the one that existed when he took office.
This new confidence was seen in full force as he addressed District of Columbia police officers and firefighters last Tuesday (Nov. 12). After praising the reelected Democratic mayor of Washington, D.C., Anthony Williams, for doing a "great job" and saying he "appreciated" the service of liberal Rep. Eleanor Holmes Norton (D-D.C.) (confident people can afford to be charitable), he thanked the emergency personnel who keep Washington "buttoned up" so that "I feel safe living here." He praised at length the chief of police, Charles Ramsey, who became a national figure during the Chandra Levy-Gary Condit soap opera two summers ago. Probably none of those he mentioned voted for him (D.C. politics is heavily Democratic), but confidence does not require universal approval.
The president made another pitch for a department of homeland security, which the lame-duck Congress is likely to give him. That's because Democrats seem less confident than they were in their preelection hubris.
Reading the president's speech leaves at least two impressions. One is that the president believes we are going to be attacked again, despite massive efforts to hunt down and root out the terrorists among us. That's partly because our immigration policy has been far too liberal, and we have allowed our enemies to live among us, even permitting them to become U.S. citizens. Citizenship has not changed their evil intent. "The enemy can strike us here at home," said the president, and he warned that "the old ways" of dealing with threats to this country are gone and that America itself is now "a battlefield." It wasn't a complaint so much as a warning against complacency.
The second impression made by the speech is that the president hasn't changed his objectives. If anything, he has become more resolved. He pledged a strategy for "hunting these killers down one at a time" and said his post-Sept. 11 "doctrines still exist." These include his black or white statement: "you're either with us or with the enemy." He added, "There is no cave deep enough for these people to hide in .... There's no shadow of the world dark enough for them to kind of slither around in. We're after them, and it's going to take a while .... We're after them one person at a time. We owe that to the American people. We owe that to our children."
Reminding us whom we're dealing with, the president said, "This is a war. (Sept. 11) is not a single, isolated incident. We are now in the first war of the 21st century. And it's a different kind of war than we're used to .... Part of the difference is that the battlefield is now here at home. It's also a war where the enemy doesn't show up with airplanes that they own, or tanks or ships. These are suiciders. These are cold-blooded killers."
Part of a president's job is to warn the public of threats and then deal with them as best he can. Another part is to motivate and mobilize the country to be on the alert and to be co-combatants against those who would tear down what generations of us have built. President Bush is likely to get far more out of the new Congress than his detractors think possible. That will be due, in part, not only to his slender congressional majority but to a new confidence that will quickly infect his supporters as well as his opponents.
|
Add comment December 10th, 2012 Headsman
On this date in 1900, John Filip Nordlund was beheaded with Albert Dahlman‘s axe at Sweden’s Västerås County Jail.
The second-last person executed in Sweden (English Wikipedia entry | Swedish) was the author of an infamously fiendish murder spree aboard a ferry steamer crossing Lake Mälaren for Stockholm on the evening of May 16, 1900: shortly after the Prins Carl‘s departure from Arboga, Nordlund, armed with two revolvers and two blades, went on a rampage through the boat (Swedish link), shooting or stabbing everyone he saw.
The spree left five dead, including the ship’s captain, and several others wounded. Then Nordlund lowered a lifeboat into the water and rowed away with about 800 stolen kronor … and the opprobrium of the nation.
Nordlund stalks the Prins Carl, from this verse pdf (Swedish).
Police were able to track him from the descriptions of witnesses to a train station and arrest him the very next day. Their maniac would turn out to be a 25-year-old career thief, only released the month before from his latest prison stint.
Although captured trying to flee, Nordlund from the first projected resignation — even relief, writing his parents that he would be well rid of a society he had never felt part of. Certainly the sentence was in little doubt given the infamy of the crime (Nordlund was almost lynched after arrest), and the man made no attempt to defend himself or mitigate his actions in court, nor to seek mercy after conviction.
Nordlund was the third person executed in Sweden in 1900 alone, but there would be no more patients for Dahlman for a decade … until 1910, when Sweden conducted its first and only guillotining. The country has not carried out a death sentence since.
Besides being the penultimate executee in Swedish history, John Filip Nordlund is also the last man in Europe beheaded manually (rather than with Dr. Guillotin’s device) other than in Germany.
|
Boomers' aging casts light on geriatrics shortage
AP News
Posted: Nov 05, 2011 11:42 AM
In this sleepy, riverside town in northeast Florida, 86-year-old Betty Wills sees the advertisements of obstetricians and gynecologists on the main road's billboards and has found specialists ranging from cardiologists to surgeons in the phone book.
But there's not a single geriatrician, a doctor who specializes in treating the elderly, in all of Putnam County, where a fifth of the county's 74,000 people are seniors.
"I looked," Wills said. "I didn't find one."
It's a nationwide shortage and it's going to get worse as the 70 million members of the baby-boom generation (those now 46 to 65) reach their senior years over the next few decades.
The American Geriatrics Society says today there's roughly one geriatrician for every 2,600 people 75 and older. Without a drastic change in the number of doctors choosing the specialty, the ratio is projected to fall to one geriatrician for every 3,800 older Americans by 2030. Compare that to pediatricians: there is about 1 for every 1,300 Americans under 18.
Geriatricians, at their best, are medicine's unsung heroes. They understand how an older person's body and mind work differently. They listen more but are paid less than their peers. They have the skills to alleviate their patients' ailments and help them live fuller, more satisfied lives.
Though not every senior needs a geriatrician, their training often makes them the best equipped to respond when an older patient has multiple medical problems. Geriatricians have expertise in areas that general internists don't, including the changes in cognitive ability, mood, gait, balance and continence, as well as the effects of drugs on older individuals.
But with few doctors drawn to the field and some fleeing it, the disparity between the number of geriatricians and the population it serves is destined to grow even starker.
"We're an endangered species," said Dr. Rosanne Leipzig, a renowned geriatrician at Mount Sinai Medical Center in New York.
Just 56 percent of first-year fellowship slots in geriatrics were filled last academic year, according to a University of Cincinnati study, while the number of physicians on staff at U.S. medical schools' geriatric programs has generally been trending downward.
Many young doctors aren't receiving even basic training in caring for older patients. Only 56 percent of medical students had clinical rotations in geriatrics in 2008, according to the study.
Various efforts around the country have aimed to increase both those choosing the geriatrics specialty and the level of training all doctors get in treating older patients.
The federal health overhaul law also includes a number of provisions aimed at increasing geriatric care. Last year, under the law, 85 grants totaling $29.5 million funded a range of geriatrics training programs for doctors, dentists, mental health professionals and other medical workers.
For now, though, the shortage continues.
"The shifting demographics is causing other primary care physicians to focus more on frail older adults but they do not have the training or experience to manage complex older adults with multiple chronic diseases," said Dr. Peter DeGolia, director of the Center for Geriatric Medicine at University Hospitals Case Medical Center in Cleveland.
Karen Roberto, director of the Center for Gerontology at Virginia Tech, said doctors who aren't trained in geriatrics might have a tendency to discount an older person's problems as normal symptoms of aging, when in fact they can be treated. She receives calls from people around the state looking for geriatricians, but oftentimes can't offer a recommendation.
"Going from specialist to specialist is not the answer," she said. "Older adults need providers with comprehensive knowledge of their problems and concerns."
Wills, for her part, moved with speed around the Edgar Johnson Senior Center, cooking lunch and sweeping the floor before her line dancing class began.
Wills joked about having outlived a number of her doctors, and how Jack Daniels sometimes is the best medicine. She wasn't sure a geriatrician would have all the answers, but she thought they might understand a woman of her age better than other doctors. She was unsuccessful finding one in her county.
"They depend on tests, they depend on machines, they depend on pills," she said. "Sometimes listening to you is better than hooking you up to machines."
|
Saturday, February 3, 2007
The Supremes
~Nearly two years ago the Supreme Court ruled on Granholm V. Heald, a case connected to wine that has been reported in the press as some sort of breakthrough. Unfortunately, the ruling was what a lot of the court’s rulings become: a narrow decision with a wide ambiguity running through it.
~In short, the Supreme Court justices said that no state can bar wine shipments direct to consumers from wineries in other states while allowing wine shipments direct to consumers from wineries within that state. To the judges, the legal choice a state has is either to allow every winery to ship direct to consumers within that particular state or to bar every winery from shipping direct to consumers.
~Giving wineries in your state a commercial advantage over wineries out of state is called protectionism.
~Either I’m an idiot, which is always a possibility, or the justices have a blind spot when it comes to alcohol, which is not so impossible, as I will show you later on. In any event, alcohol seems to be the only legal commodity that does not deserve the protection of the Commerce Clause found in Article 1, Section 8 of the US Constitution.
The Commerce Clause is a legal doctrine in U.S. Constitutional law that limits the power of states to legislate and impact interstate commerce. The U.S. Constitution reserves for Congress the exclusive power "To regulate Commerce with foreign Nations, and among the several States…”
~The above definition means that individual states are either excluded from, or limited in their ability to legislate on interstate commerce—the bench has often cited the Commerce Clause to restrict states from passing protectionist regulations and to apply undue burdens on interstate commerce.
~ Granholm V. Heald was seemingly about wine shipping. But what it has turned out to be about is the old guard being threatened and state legislators being bought. Not much new there, really.
~What the case should have been about was a conflict between the 21st Amendment to the Constitution and the Commerce Clause.
~The 21st Amendment to the Constitution is the only amendment to have repealed an earlier amendment (the 18th Amendment: Prohibition), which is not an easy thing to do since after an amendment is passed in Congress it must be ratified by the states. In this case, however, the states were handed a veritable cash cow; the real wonder is why any state had to think over ratification.
~ In its wisdom, Congress inserted into the 21st Amendment the right of individual states to regulate and control the sale and distribution of alcohol within state borders. By so doing, Congress opened a floodgate of confusion, not to mention corruption.
~State legislators were quick to see a bonanza revenue source; they imposed taxes, fees, stipulations, restrictions (which always lead to fines) and all kinds of social engineering, like Sunday “Blue Laws,” prescribed hours of operation, wine in or not in grocery stores, and so on.
~Most of all, the states were handed the freedom to create whatever system they wanted in order to provide or withhold access to alcohol. Mostly, the states built individual Byzantine empires known either as Liquor Control or Alcohol Control Boards. The boards are generally made up of political appointees, and you know what that means.
Some states opted for the three-tier system, which builds in not only a mandated middle-merchant, but also a sizable cost to the consumer and a quasi-private monopolized business that is heavily controlled by the state. In some states a retailer can buy certain wines from only one distributor--how's that for creating free competition?
Some states opted for state control. In other words, the state buys and sells alcohol. When the state is involved in commerce you have the potential for corruption, and in states that opted for this choice potential has usually become reality. Yet, at least these states did not offer the illusion that the three-tier states offered, that they were allowing competitive commerce to flourish.
Some states allowed localities to select whether or not they wanted an alcohol business in their community. This option allowed situations where a person could cross an intersection into another county and buy wine that could not be bought in the county across the street. Of course, crossing the street into the “dry” county carrying the wine in a bag might have left the person open to prosecution.
I'm sure there are other choices states could have made, but thinking any further about them gives me agita.
~Notice that through the maze of state legislative restrictions and controls, something American always got lost: free flow of commerce.
~Because the states have such power over the commerce of alcohol, the result of the recent Supreme Court ruling was not exactly an opening of wine shipments across state borders, as the always-eager-to-recite-a-press-release media often reported.
~What really happened is that the states created new legislation that opens shipping, but that usually makes it extremely difficult and more expensive. Imagine owning a small family winery and having to apply for fifty licenses—one for each state—in order to send wine direct to consumers in the rest of the country. That is only one costly hurdle. There are restrictions on volume, on cases shipped, and probably on the color of the box that the winery uses as packaging. I'm not being cynical; in some states a winery that submits certain tax reports on the wrong colored paper risks being cited in violation.
~As predicted, a flurry of court actions is popping up all over the USA—some exploit the glaring Supreme Court ambiguities and some want them fixed.
~In my opinion, the only thing that will definitively end the legal battles is a direct ruling by the court whether or not the 21st Amendment violates the Commerce Clause; if the court were to find that it is a violation, then a whole lot of things regarding access to wine are likely to change. But any winery willing to take on that fight has to be careful. It's not a given that a court ruling would be on the side of the Commerce Clause.
~When the Supreme Court ruled in the wine shipping case Justice Kennedy read the majority opinion. He made a fleeting reference to the Commerce Clause, calling alcohol a “special” commodity that may not deserve the same commercial protections as other legal products.
~Isn’t that great? Free and unfettered commerce is a constitutional right, but for wine it will be a right only so long as legal alcohol fits the majority morality of the Supreme Court justices or the Congress. You can’t ask for anything more beautifully hypocritical than that.
In my view, Congress should free alcohol from its unconstitutional chains.
On the other hand, Congress can pass an amendment to the constitution to make alcohol illegal—now there’s a novel approach…
Copyright, Thomas Pellechia
February, 2007. All Rights Reserved.
|
#include "CalibTracker/SiPixelConnectivity/interface/PixelToFEDAssociateFromAscii.h"
#include "DataFormats/SiPixelDetId/interface/PixelBarrelName.h"
#include "DataFormats/SiPixelDetId/interface/PixelEndcapName.h"
#include "FWCore/MessageLogger/interface/MessageLogger.h"
#include <ostream>
#include <fstream>
#include "FWCore/Utilities/interface/Exception.h"
using namespace std;
PixelToFEDAssociateFromAscii::PixelToFEDAssociateFromAscii(const string & fn) {
init(fn);
}
std::string PixelToFEDAssociateFromAscii::version() const
{
return theVersion;
}
int PixelToFEDAssociateFromAscii::operator()(const PixelModuleName & id) const
{
return id.isBarrel() ?
operator()(dynamic_cast<const PixelBarrelName & >(id)) :
operator()(dynamic_cast<const PixelEndcapName & >(id)) ;
}
int PixelToFEDAssociateFromAscii::operator()(const PixelBarrelName & id) const
{
for (BarrelConnections::const_iterator
ibc = theBarrel.begin(); ibc != theBarrel.end(); ibc++) {
for (vector<Bdu>::const_iterator
ibd = (*ibc).second.begin(); ibd != (*ibc).second.end(); ibd++) {
if ( ibd->b == id.shell()
&& ibd->l.inside( id.layerName() )
&& ibd->z.inside( id.moduleName() )
&& ibd->f.inside( id.ladderName() ) ) return (*ibc).first;
}
}
edm::LogError("** PixelToFEDAssociateFromAscii WARNING, name: ")
<< id.name()<<" not associated to FED";
return -1;
}
int PixelToFEDAssociateFromAscii::operator()(const PixelEndcapName & id) const
{
for (EndcapConnections::const_iterator
iec = theEndcap.begin(); iec != theEndcap.end(); iec++) {
for (vector<Edu>::const_iterator
ied = (*iec).second.begin(); ied != (*iec).second.end(); ied++) {
if ( ied->e == id.halfCylinder()
&& ied->d.inside( id.diskName() )
&& ied->b.inside( id.bladeName() ) ) return iec->first;
}
}
edm::LogError("** PixelToFEDAssociateFromAscii WARNING, name: ")
<< id.name()<<" not associated to FED";
return -1;
}
void PixelToFEDAssociateFromAscii::init(const string & cfg_name)
{
LogDebug("init, input file:") << cfg_name.c_str();
std::ifstream file( cfg_name.c_str() );
if ( !file ) {
edm::LogError(" ** PixelToFEDAssociateFromAscii,init ** ")
<< " can't open data file: " << cfg_name;
return;
} else {
edm::LogInfo("PixelToFEDAssociateFromAscii, read data from: ") <<cfg_name ;
}
string line;
pair< int, vector<Bdu> > barCon;
pair< int, vector<Edu> > endCon;
try {
while (getline(file,line)) {
//
// treat # lines
//
string::size_type pos = line.find("#");
if (pos != string::npos) line = line.erase(pos);
string::size_type posF = line.find("FED:");
string::size_type posB = line.find("S:");
string::size_type posE = line.find("E:");
LogDebug ( "line read" ) << line;
//
// treat version lines, reset date
//
if ( line.compare(0,3,"VER") == 0 ) {
edm::LogInfo("version: ")<<line;
theVersion = line;
send(barCon,endCon);
theBarrel.clear();
theEndcap.clear();
}
//
// fed id line
//
else if ( posF != string::npos) {
line = line.substr(posF+4);
int id = atoi(line.c_str());
send(barCon,endCon);
barCon.first = id;
endCon.first = id;
}
//
// barrel connections
//
else if ( posB != string::npos) {
line = line.substr(posB+2);
barCon.second.push_back( getBdu(line) );
}
//
// endcap connections
//
else if ( posE != string::npos) {
line = line.substr(posE+2);
endCon.second.push_back( getEdu(line) );
}
}
send(barCon,endCon);
}
catch(exception& err) {
edm::LogError("**PixelToFEDAssociateFromAscii** exception")<<err.what();
theBarrel.clear();
theEndcap.clear();
}
//
// for debug
//
std::ostringstream str;
str <<" **PixelToFEDAssociateFromAscii ** BARREL FED CONNECTIONS: "<< endl;
for (BarrelConnections::const_iterator
ibc = theBarrel.begin(); ibc != theBarrel.end(); ibc++) {
str << "FED: " << ibc->first << endl;
for (vector<Bdu>::const_iterator
ibd = (*ibc).second.begin(); ibd != (*ibc).second.end(); ibd++) {
str << "b: "<<ibd->b<<" l: "<<ibd->l<<" z: "<<ibd->z<<" f: "<<ibd->f<<endl;
}
}
str <<" **PixelToFEDAssociateFromAscii ** ENDCAP FED CONNECTIONS: " << endl;
for (EndcapConnections::const_iterator
iec = theEndcap.begin(); iec != theEndcap.end(); iec++) {
str << "FED: " << iec->first << endl;
for (vector<Edu>::const_iterator
ied = (*iec).second.begin(); ied != (*iec).second.end(); ied++) {
str << " e: "<<ied->e<<" d: "<<ied->d<<" b: "<<ied->b<<endl;
}
}
edm::LogInfo("PixelToFEDAssociateFromAscii")<<str.str();
}
void PixelToFEDAssociateFromAscii::send(
pair<int,vector<Bdu> > & b, pair<int,vector<Edu> > & e)
{
if (b.second.size() > 0) theBarrel.push_back(b);
if (e.second.size() > 0) theEndcap.push_back(e);
b.second.clear();
e.second.clear();
}
PixelToFEDAssociateFromAscii::Bdu PixelToFEDAssociateFromAscii::getBdu( string line) const
{
Bdu result;
string::size_type pos;
result.b = readRange(line).first;
pos = line.find("L:");
if (pos != string::npos) line = line.substr(pos+2);
result.l = readRange(line);
pos = line.find("Z:");
if (pos != string::npos) line = line.substr(pos+2);
result.z = readRange(line);
pos = line.find("F:");
if (pos != string::npos) line = line.substr(pos+2);
result.f = readRange(line);
return result;
}
PixelToFEDAssociateFromAscii::Edu PixelToFEDAssociateFromAscii::getEdu( string line) const
{
Edu result;
string::size_type pos;
result.e = readRange(line).first;
pos = line.find("D:");
if (pos != string::npos) line = line.substr(pos+2);
result.d = readRange(line);
pos = line.find("B:");
if (pos != string::npos) line = line.substr(pos+2);
result.b = readRange(line);
return result;
}
PixelToFEDAssociateFromAscii::Range
PixelToFEDAssociateFromAscii::readRange( const string & l) const
{
bool first = true;
int num1 = -1;
int num2 = -1;
const char * line = l.c_str();
while (line) {
char * evp = 0;
int num = strtol(line, &evp, 10);
{ stringstream s; s<<"read from line: "; s<<num; LogDebug(s.str()); }
if (evp != line) {
line = evp +1;
if (first) { num1 = num; first = false; }
num2 = num;
} else line = 0;
}
if (first) {
string s = "** PixelToFEDAssociateFromAscii, read data, can't interpret: " ;
edm::LogInfo(s) << endl
<< l << endl
<<"=====> send exception " << endl;
s += l;
throw cms::Exception(s);
}
return Range(num1,num2);
}
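The `readRange` helper above accepts either a single integer ("7", giving the degenerate range [7,7]) or a pair separated by any non-digit character ("1,12", giving [1,12]); `getBdu`/`getEdu` strip the "L:"/"Z:"/"F:"/"D:"/"B:" prefixes before handing the remainder to it. A minimal standalone sketch of that parsing logic (the name `parseRange` is hypothetical, not part of CMSSW; the original's habit of stepping one past the end of the string is tightened up here):

```cpp
#include <cstdlib>
#include <utility>

// Standalone sketch of readRange(): pull successive integers out of a line
// with strtol; the first becomes the lower bound, the last the upper bound.
//   "1,12" -> (1, 12)      "7" -> (7, 7)
std::pair<int, int> parseRange(const char* line) {
  bool first = true;
  int num1 = -1, num2 = -1;
  while (line) {
    char* evp = nullptr;
    long num = std::strtol(line, &evp, 10);
    if (evp != line) {                             // a number was consumed
      if (first) { num1 = static_cast<int>(num); first = false; }
      num2 = static_cast<int>(num);
      line = (*evp != '\0') ? evp + 1 : nullptr;   // skip separator, stop at end
    } else {
      line = nullptr;                              // no digits left: stop
    }
  }
  return {num1, num2};
}
```

With a data-file line such as `S: 1 L: 1 Z: 1,4 F: 1,8` (format inferred from the parser above; actual cabling files may differ), `getBdu` would yield shell 1, layers [1,1], modules [1,4], ladders [1,8].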
|
If farmers are to increase food production and food security, they need better access to agricultural support systems, such as credit, technology, extension services and agricultural education, as well as to the rural organizations that often channel other services. Both men and women smallholders and poor farmers have frequently been cut off from these essential agricultural support systems, which seldom take into account the different responsibilities and needs of men and women farmers. In spite of their enormous potential and their crucial roles in agricultural production, women in particular have insufficient access to production inputs and support services.
This trend underlines the need to implement measures aimed at enhancing the access of small farmers, especially women, to production inputs - particularly since the working environment of development organizations has
changed as a result of market liberalization and a reduced role for the state worldwide. National agricultural extension systems are no exception to this rule, and must respond by making internal and external adjustments. Great attention is required so that the adjustments do not become detrimental to women and men small farmers. For example, FAO's field experiences over the last decade have pointed to the need for extension programmes that are more strategically planned, needs-based, participatory and problem solving.
Women's access to and use of agricultural support systems is also severely limited by the heavy burden on time and energy that results from their triple responsibilities - productive activities (such as work in the fields), reproductive activities (such as child rearing, cooking and household chores) and community management.
In order to improve production, farmers need access to financial capital. Buying seeds, fertilizer and other agricultural inputs often requires short-term loans, which are repaid when the crops are harvested. Installing major improvements, such as irrigation pumps, or acquiring new technology that increases future yields is impossible without access to long-term credit.
Smallholders, particularly women, often face difficulties in obtaining credit. This is a direct consequence of their lacking access to land, participation in development projects and extension programmes and membership in rural organizations, all of which are important channels for obtaining loans and credit information. In several countries of sub-Saharan Africa, where women and men farmers are roughly equal in number, it is estimated that women farmers receive only 10 percent of the loans granted to smallholders and less than 1 percent of the total credit advanced to the agriculture sector.
Credit delivery can be improved by setting up microfinance institutions in rural areas and reorienting the banking system to cater to the needs of small farmers, especially women. The Grameen Bank in Bangladesh, which first pioneered the microcredit approach in 1976, currently reaches more than 2 million people. Since it was founded, the bank has lent more than US$2.1 billion, most of it in the form of loans of a few hundred dollars for small agriculture, distribution, crafts and trading enterprises. Numerous studies have shown that women are generally more reliable and punctual in repaying their loans than men are.
A programme providing credit and nutrition for women significantly improved both the participating women's incomes and their children's nutritional status. This is the conclusion of a study that examined the impact of a credit and education programme run by the NGO Freedom from Hunger.
Men and women smallholders also suffer financially from limited access to the marketing services that would allow them to turn surplus produce into cash income. Women face particular difficulties because marketing infrastructure and organizations are rarely geared towards either small-scale producers or the crops that women grow. Although women all over the world are active as traders, hawkers and street and market vendors, little has been done to improve transport and market facilities to support this vital economic sector. Even where rural women play an important role in wholesale trade, their full membership in marketing service institutions is still difficult because they may be illiterate or lack independent legal status.
Planning for action
The FAO Gender and Development Plan of Action includes commitments by different Divisions of FAO to increasing the equality of access to a wide range of agricultural support systems, including markets, credit, technology, extension and training.
Rural finance and marketing services
Rural groups and organizations
Agricultural research and technology
Agricultural education and extension
Microcredit and education boost incomes and nutrition
A study examined the impact of a microcredit and educational programme implemented by the NGO Freedom from Hunger. In Ghanaian villages, women who participated in the programme used microcredit loans to launch income-generating activities such as preparing and selling palm oil, fish and cooked foods. They increased their non-farm income by $36 per month, twice as much as the women who had not taken part in the programme. Through the programme's educational component, participating women also gained valuable knowledge about their children's nutrition and health needs.
Membership of cooperatives, farmers' organizations, trade unions and other organizations represents one of the best ways for rural men and women to gain access to resources, opportunities and decision-making. Cooperatives and farmers' associations generally make it possible for farmers to share the costs and rewards of services that they could not afford on their own. They can be an invaluable channel for obtaining technology, information, training and credit. They can also give smallholders a much louder voice in local and national decision-making. By instituting common food processing, storage and marketing activities, organizations can increase the exchange of goods and services and the access to national and regional markets.
Participation in such organizations can be especially important to smallholders and poor farmers, both men and women. But women are frequently deterred from joining because membership is often restricted to recognized landowners or heads of household. Even when women are responsible for the day-to-day management of both households and holdings, their husbands or other male relatives are often considered the official heads.
In many regions, women farmers' membership of these organizations is restricted by custom. Where they are able to belong to rural organizations, women often do not share equally in either the decision-making or the benefits, and are excluded from leadership positions. Furthermore, their many household chores may make it impossible for them to attend meetings and devote the time that is necessary for full participation. Investment in labour-saving technologies to relieve the burden of women's unpaid productive and reproductive tasks is needed in order to give them more free time.
In recent years there has been some success in reducing the obstacles to women's participation in rural organizations. At the same time, the use and establishment of traditional and new women's groups to promote women's participation in rural development has grown rapidly. However, experience has shown that women's empowerment often requires a step-by-step process to remove the barriers to their membership in organizations that are traditionally dominated by men. Furthermore, it is necessary to give them support, individually or collectively, to enable them to gain the knowledge and self-confidence needed to make choices and take greater control of their lives.
In all regions of the developing world, women typically work far longer hours than men do. Studies in Asia and Africa show that women work as much as 13 extra hours a week. As a result, they may have little available time to seek out support services, and very different priorities for the kind of support required.
Overall, the agricultural research agenda has neglected the needs of smallholders, especially women farmers, and failed to take advantage of their invaluable knowledge about traditional farming methods, indigenous plant and animal varieties and coping techniques for local conditions. Such knowledge could hold the key to developing sustainable approaches that combine modern science with the fruits of centuries of experimentation and adaptation by men and women farmers.
Most research has focused on increasing the yields of commercial crops and staple grains on high-input farms, where high-yielding varieties can be cultivated under optimal conditions. Smallholders can rarely afford these technology «packages», which are also generally ill suited to the climatic and soil conditions in areas where most of the rural poor live. The crops that farmers in such areas rely on and the conditions that they face have not featured prominently in agricultural research. Sorghum and millet, for example, have received very little research attention and funding, despite their high nutritional value and ability to tolerate difficult conditions. Similarly, relatively little research has been devoted to the secondary crops grown by women, which often provide most of their family's nutritional needs.
In addition, agricultural tools and implements are rarely designed to fit women's physical capabilities or work, so they do not meet women's needs. The impact of new technologies is seldom evaluated from a gender perspective. The introduction of harvesting, threshing and milling machinery, for example, has very little direct effect on yields but eliminates thousands of hours of paid labour. According to one study, if all the farmers in Punjab, India, who cultivate more than 4 ha were to use combine harvesters, they would lose more than 40 million paid working days, without any increase in farm production or cropping intensity. Most of the lost labour and income would be women's.
«Schools where men and women farmers learn how to increase yields and reduce their reliance on pesticides by relying on natural predators.»
Developing technology to meet women's specific needs can yield major gains in food production and food security. In Ghana, for example, technology was introduced to improve the irrigation of women's off-season crops. Larger and more reliable harvests increased both food and economic security during the periods between major crops. In El Salvador, where women play an extremely important role in agriculture, it is estimated that as many as 60 percent of households are headed by women. One of the major goals of this country's agriculture sector reform was to improve research and extension activities by focusing on the role of women smallholders. To address women farmers' needs, the project promoted women's participation to help guide the research programme at National Agricultural Technology Centre farms.
Farmer field schools in Cambodia
In fields across Cambodia, men and women farmers gather every week to go to school. They are among the 30 000 Cambodian farmers - more than one-third of them women - who have taken part in FAO-supported farmer field schools (FFS). In the schools, farmers observe how crops develop and monitor pests throughout the growing season. They also learn how natural predators, such as wasps and spiders, can help control pests and how the heavy use of pesticides often kills them off, leaving crops even more vulnerable. These schools emphasize the active participation and empowerment of both men and women farmers. In at least six provinces in Cambodia, farmers have formed integrated pest management (IPM) groups after completing their training, and are carrying out further field studies and experiments. More than 300 farmers have completed additional training and are now organizing farmer field schools in their own areas. «I always knew pesticides were bad for my health,» one participant said, «but now I know for sure.» After completing the school, farmers rely more on cultural practices and natural enemies to control pests, and experience fewer cases of poisoning.
Agricultural extension programmes provide farmers with a lifeline of information about new technologies, plant varieties and market opportunities. In almost all countries, however, the agricultural extension system fails to reach women farmers effectively. Among other reasons, this is because they are excluded from rural organizations. An FAO survey showed that, worldwide, female farmers receive only 5 percent of all agricultural extension services and only 15 percent of agricultural extension agents are women. In Egypt, where women make up more than half of the agricultural labour force, only 1 percent of extension officers are female.
«An FAO extension project in Honduras that focused on woman-to-woman training boosted both subsistence production and household food security.»
This reflects the lack of information and understanding about the important role played by women. Extension services usually focus on commercial rather than subsistence crops, which are grown mainly by women and which are often the key to household food security. Available data rarely reflect women's responsibility for much of the day-to-day work and decision-making on the family farm. Nor do they recognize the many other important food production and food processing activities that women commonly perform, such as home gardening, tending livestock, gathering fuel or carrying water.
Extension programmes can be tailored to address women's priority needs only when men and women farmers are listened to at the village level and when such methods as participatory rural appraisal are employed. In recent years, a number of countries have launched determined efforts to make their extension services more responsive to women's needs. In the Gambia, for example, the proportion of female agricultural extension workers has increased from 5 percent in 1989 to more than 60 percent today. Growth in the number of female extension workers has been matched by increased attention to women's involvement and priorities. A special effort has been made to encourage women's participation in small ruminant and poultry extension services.
In Nicaragua, efforts to ensure that extension services match client needs - including giving more attention to the diverse needs of men and women farmers - led to increased use of those services, by 600 percent for women and 400 percent for men.
Extension programmes that fail to take women into account also fail to deliver the improved technology and methods that might yield major gains in productivity and food security. Furthermore, beyond existing socio-cultural barriers, they often schedule training at times and in locations that make it impossible for women to participate.
Recommended new approaches include the Strategic Extension Campaign (SEC), which was developed by FAO and introduced in Africa, the Near East, Asia and Latin America. This methodology emphasizes how important it is for field extension workers and small farmers to participate in the strategic planning, systematic management and field implementation of agricultural extension and training programmes. Its extension strategies and messages are specifically developed and tailored to the results of a participatory problem identification and needs assessment.
Training Programme for Women's Incorporation in Rural Development
Several hundred peasant women in Honduras were trained to serve as «food production liaisons». After receiving their training, the liaisons worked with grassroots women's groups. They focused on impoverished rural areas where chronic malnutrition is widespread and 70 percent of all breastfeeding mothers suffer from vitamin A deficiency. Women involved with the project increased the subsistence production of nutritious foods. Credits to develop poultry production proved an effective way of increasing motivation, nutritional levels and incomes. Some of the grassroots women's groups involved with the project sought credit through extension agencies or from the Rotating Fund for Peasant Women. The credit was used to initiate other social and productive projects, including purchasing a motorized maize mill and planting soybeans for milk.
|
Open Access
Zika Virus on a Spreading Spree: what we now know that was unknown in the 1950’s
Virology Journal 2016, 13:165
Received: 22 August 2016
Accepted: 26 September 2016
Published: 6 October 2016
Zika virus (ZIKV) is a mosquito-borne flavivirus that is transmitted through the bite of Aedes spp mosquitoes and, less commonly, through sexual intercourse. Prior to 2007, ZIKV was associated with only sporadic human infections with minimal or no clinical manifestations. Recently the virus has caused disease outbreaks in the Pacific Islands, the Americas, and off the coast of West Africa, with approximately 1.62 million people suspected to be infected in more than 60 countries around the globe. The recent ZIKV outbreaks have been associated with Guillain-Barré syndrome, congenital syndrome (microcephaly, congenital central nervous system anomalies), miscarriages, and even death. This review summarizes the path of ZIKV outbreaks within the last decade, highlights three novel modes of ZIKV transmission associated with recent outbreaks, and points to the hallmarks of congenital syndrome. The review concludes with a summary of challenges facing ZIKV research, especially the control of ZIKV infection in the wake of the most recent data showing that anti-dengue virus antibodies enhance ZIKV infection.
Zika virus Sexual transmission Neurological development Microcephaly Antibody-dependent enhancement
• ZIKV can be transmitted sexually and during pregnancy from a mother to her fetus.
• In addition to microcephaly, ZIKV causes miscarriage.
• ZIKV persists in whole blood for close to 2 months.
• Dengue viral infections enhance ZIKV infections.
• Anti-ZIKV antibodies in domestic animals suggest that ZIKV can infect domestic animals.
Zika virus (ZIKV) is a positive-sense single-stranded RNA virus in the genus Flavivirus. The virus is related to other flaviviruses (dengue viruses-DENVs, yellow fever virus-YFV, Japanese encephalitis virus, St. Louis encephalitis virus, West Nile virus, tick-borne encephalitis virus, Langat virus, Powassan virus, Modoc virus, Rio Bravo virus) in terms of genome size and genome organization [1, 2] (Fig. 1). The genome is ~10.7 kb in length and codes for a single polyprotein (~10.2 kb), which is processed into three structural proteins (Capsid-C, pre-Membrane/Membrane-prM/M and Envelope-E) and seven nonstructural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) (Fig. 1). The genome is flanked by the 5’ and 3’ untranslated regions (UTRs) [1, 3, 4]. ZIKV has been associated with numerous disease outbreaks within the last decade. With this in consideration, we searched peer-reviewed articles, government news briefings in the Associated Press, and press releases by international organizations for information related to ZIKV. We searched specifically for ZIKV isolation, what was known when the virus was first isolated (clinical manifestations), modes of transmission, the vectors that can transmit the virus, disease outbreaks especially from 2007, and the clinical/pathological manifestations observed in recent outbreaks. The information gathered from these searches was then used to write this review, which summarizes the incidence and global distribution of ZIKV infection (within the last decade), modes of transmission, and challenges associated with ZIKV research.
Fig. 1
ZIKV genome. The linear genome is made up of structural proteins, nonstructural proteins, and UTRs. The 5’ and 3’ UTRs are ~107 nucleotides and ~428 nucleotides, respectively
Although ZIKV is closely related to other flaviviruses in terms of genome size and genome organization, the virus is most closely related to DENVs and YFV in terms of mosquito vector transmission. ZIKV is a mosquito-borne virus that is transmitted primarily through the bite of Aedes spp mosquitoes. The virus was first isolated in 1947 from monkeys in the Zika forest in Uganda, Africa [5]. Since then, the virus has been isolated from other Aedes mosquitoes (Aedes aegypti, Aedes albopictus, Aedes africanus, Aedes hensilli, Aedes polynesiensis, Aedes furcifer, Aedes vitattus) [6] and more recently from a domestic mosquito, Culex quinquefasciatus [7]. In sylvatic habitats, ZIKV is transmitted in an enzootic cycle between non-human primates by mosquitoes. In an epidemic cycle, the virus is transmitted between humans primarily by infected mosquitoes [5, 8, 9] (Fig. 2). Introduction of the virus to a human community may be initiated by a spillover mosquito from sylvatic habitats; alternatively, the virus may be imported by humans from countries with ZIKV outbreaks. Antibodies against ZIKV have been detected in domestic animals such as goats, sheep and rodents [10], but it is unknown whether mosquitoes can transmit the virus between domestic animals or between domestic animals and humans.
Fig. 2
ZIKV transmission cycles. ZIKV is transmitted in sylvatic habitats in an enzootic cycle by infected mosquitoes to rhesus monkeys and vice versa. Humans can be infected with the virus in sylvatic habitats following a mosquito bite or if there is a spillover of an infected mosquito from sylvatic habitats (middle dotted black line) to rural/urban areas. An epidemic cycle starts when humans are bitten by an infected mosquito followed by viral replication in humans and viremia. The virus can spread to the reproductive organs and can be transmitted during sexual intercourse. Infected pregnant women can also transmit the virus to the fetus during pregnancy. The virus can then be transmitted from an infected person back to mosquitoes through mosquito bites. The virus then replicates in mosquitoes and it is transmitted back to humans and the cycle continues. It is not known whether the virus can be transmitted by mosquitoes between domestic animals and humans (right dotted gray lines with question mark) or whether the virus can be transmitted sexually between monkeys (left dotted gray line with question mark)
Phylogenetically, ZIKV can be divided into two main lineages—African and Asian—based on geographic origin. The African lineage is further sub-divided into West and East African sub-lineages [11]. ZIKV has been associated with a number of sporadic human infections, based on the detection of anti-ZIKV antibodies in serum, starting from 1952 in Africa [12] and 1954 in Asia [13]. In 2007, the virus caused the first major outbreak in the Pacific Islands [14], which later spread to other countries.
a) ZIKV outbreak path since 2007
From 2007, ZIKV outbreaks have been reported in many islands and continents as follows:
i) Pacific Islands: In 2007, a ZIKV outbreak from autochthonous transmission was reported in Yap Island in the Federated States of Micronesia, with 185 people infected (including confirmed, probable, and suspected cases) [14]. This outbreak was caused by the Asian lineage of ZIKV. Six years later (in 2013), another outbreak was reported ~5000 miles from Yap Island, in French Polynesia (Fig. 3); more than 28,000 people were infected with ZIKV in this outbreak [15, 16]. The ZIKV strain in the French Polynesia outbreak had 99.9 % nucleotide and amino acid identities with the Asian strain in the Yap Island outbreak [1, 3, 15], suggesting that the French Polynesia outbreak originated from Yap Island. Given the distance between the two islands, it is unlikely that the virus was introduced into French Polynesia by mosquitoes; this suggests that ZIKV was imported into French Polynesia. The French Polynesia outbreak subsequently spread to other Pacific Islands. Towards the end of 2013, imported cases from French Polynesia were reported in New Caledonia, and cases of autochthonous transmission were reported in January 2014 [17, 18] (Fig. 3). At the same time in January, an outbreak was reported in Easter Island, off the coast of Chile [19], and in February, another outbreak was reported in the Cook Islands [17, 18] (Fig. 3). The nucleotide sequence of the ZIKV strain in Easter Island was 99.9 % identical to the ZIKV strain in the French Polynesia outbreak, suggesting that the Easter Island outbreak originated from French Polynesia. Then in March 2015, the first cases of ZIKV outbreak (from autochthonous transmission) were reported in Bahia, Brazil. Nucleotide sequence analysis from this outbreak showed 99 % identity with the ZIKV strain that caused the 2013 outbreak in French Polynesia [20], thus suggesting that ZIKV was introduced to the Americas from any of the Pacific Islands (French Polynesia, New Caledonia, Easter Island, or the Cook Islands).
It is likely that the outbreaks in almost all these islands including the one in Brazil were first imported to these islands/country by an infected individual(s), who later served as reservoir host(s) for mosquito transmission to naïve individuals; ZIKV vectors are endemic in the Pacific Islands and in Brazil [21, 22]. Alternatively, the virus could have been transmitted sexually from an infected traveler to a naïve person in any of these countries.
Fig. 3
ZIKV outbreaks and transmission paths in the Pacific Islands. The first ZIKV outbreak in the Islands was reported in Yap Island in Micronesia (2007) and it was later transmitted (indicated as red arrow number 1) to French Polynesia in 2013. From French Polynesia, the virus was then transmitted to New Caledonia, Easter Island, and to Cook Island (in the order listed and indicated as red arrow numbers 2, 3, 4)
ii) The Americas: The first cases of ZIKV outbreak in Latin America were reported in Brazil (March 2015). Since then, more than 1.5 million people are estimated to have been infected in Brazil alone [23]. Mosquito-transmitted cases of the virus have been reported throughout the Americas (except Canada and Chile) [23–26], as predicted by the World Health Organization (WHO) and Pan American Health Organization (PAHO) (Fig. 4), with more than 65,000 confirmed and suspected cases reported in Colombia alone [27]. Although the number of cases is decreasing in most countries in the Americas and the Caribbean [28], the number of mosquito-transmitted cases is increasing in some countries. For example, as of mid September, more than 85 cases of autochthonous transmission had been reported in the State of Florida in the continental United States [29]. Furthermore, the number of ZIKV imported and sexually transmitted cases continues to increase; more than 3130 imported cases (numbers correct as of mid September) have been reported in the continental United States since the outbreak started in Brazil [30] (Fig. 4). In Cuba, the number of imported cases increased from 1 case in March to 33 cases (3 local transmissions) as of mid September [31]. In summary, an estimated 1.6 million people are suspected to be infected in the Americas.
Fig. 4
Autochthonous and imported cases of ZIKV around the world from 2015. ZIKV was first introduced to the Americas (Brazil) from the Pacific Islands (indicated as a red dotted circle with a red arrow). From Brazil, the virus then spread to countries in South America, Central America, the Caribbean, and off the coast of West Africa (Cape Verde). Countries with only imported ZIKV cases from the Americas are shown in purple. Countries with both mosquito-transmitted and imported cases are shown in different colors with an estimated number of suspected ZIKV cases in the respective countries. Countries in white have not reported any imported cases since 2015 (numbers correct as of September)
iii)
Off the coast of West Africa, Cape Verde: A ZIKV outbreak has also been reported in the islands of Cape Verde, off the coast of West Africa. The first ZIKV cases in the islands were reported in late September to mid-October of 2015 and, as of August 2016, more than 7550 people had already been infected [32] (Fig. 4). The genetic sequence of the ZIKV strain in Cape Verde has recently been determined and it is identical to the Asian strain in the outbreak in Brazil [32]. The virus was likely imported into the islands from Brazil by traveler(s) and was subsequently transmitted from the traveler(s) to naïve individuals through mosquito bites or through sexual contact. Brazil and Cape Verde are geographically close countries that speak the same language (Portuguese) and share almost the same culture; as such, their citizens travel frequently between the two countries. This made it very easy for an asymptomatic ZIKV-infected individual to carry the virus from one country to the other.
It is worth mentioning that, since the 2015 outbreak in Brazil, cases of ZIKV imported by travelers from countries with outbreaks have been reported all over the world.
b)
Imported cases around the world:
In North America, the number of ZIKV cases imported from Latin America continues to increase. As of mid-September 2016, more than 279 imported ZIKV infections had been reported in Canada [33] (Fig. 4). The number of imported cases is also increasing in Europe, rising from 224 in March 2016 to 1265 in August 2016; the highest numbers have been reported in France and Spain [34]. In Eurasia/Asia, 6 imported cases have been reported in Russia and more than 21 in China [35, 36]. In Singapore and Thailand, more than 300 and 200 cases have been reported, respectively (numbers correct as of mid-September) [37]. In Africa, 1 case imported from Colombia has been reported in South Africa [38]. Between the Pacific and Indian Oceans, 12 and >44 imported cases have been reported in Hawaii [30] and Australia [39], respectively. Thus, the virus has been imported to at least one country on every continent except Antarctica, making this the first ZIKV pandemic the world has ever experienced.
ZIKV was believed (until less than a decade ago) to be transmitted to humans only through the bites of Aedes spp. mosquitoes. Recently, other modes of human transmission have been documented as follows:
i)
Sexual transmission. ZIKV has been detected in the urine and semen of ZIKV-infected patients [40, 41], and more than 30 cases of male-to-female sexual transmission, 1 case of male-to-male transmission, and 1 case of female-to-male transmission have been reported [42–47]. These observations show that the virus can be transmitted between both sexes, with the highest frequency of transmission from male to female.
ii)
Vertical transmission from mother to fetus. ZIKV has been detected in amniotic fluid, fetal brain, and also in the serum of babies 4 days after birth [48–51], thus demonstrating that the virus can be transmitted to the fetus during pregnancy (Fig. 2).
iii)
Blood transfusion. Two cases of ZIKV transmission by blood transfusion have been reported in Brazil [52]. This mode makes ZIKV transmission more complicated, given that the majority of ZIKV-infected patients do not show symptoms; in fact, 3 % of asymptomatic blood donors have tested positive for ZIKV [53]. This makes it very easy for the virus to be transmitted from blood donors to blood recipients. The situation is exacerbated by the fact that the virus can persist in the whole blood of patients for close to 2 months [46, 54].
Clinical manifestations
When anti-ZIKV antibodies were first detected in human sera in the early 1950s, the authors pointed out that “The effects of this agent in man are quite unknown” [13]. They made this statement because the population sampled at that time did not show any clinical or pathological manifestations that could be associated with ZIKV infection. As mentioned above, most people infected with ZIKV are asymptomatic. However, 20–25 % of infected patients develop symptoms such as fever, skin rash, joint pains, headache, and conjunctivitis within 1 week after infection; in addition, some patients experience hematospermia [43, 46, 55]. Although ZIKV infection is not life threatening in healthy adults, the virus can cause the following debilitating conditions:
i)
Neurological problems such as Guillain-Barré syndrome (GBS; an autoimmune disease) [56]. Ninety-eight to 100 % of patients diagnosed with GBS during the French Polynesia ZIKV outbreak had anti-ZIKV antibodies, compared to 56 % of patients without GBS [57]. The mechanism(s) by which these anti-ZIKV antibodies contribute to GBS is still unknown.
ii)
Miscarriage and congenital syndromes such as microcephaly (a neurodevelopmental disorder in which babies are born with an abnormally small head) or an abnormally developed congenital central nervous system [48, 56, 58, 59]. ZIKV infects a population of developing brain cells, including embryonic forebrain-specific human neural progenitor cells, neurospheres, and brain organoids, causing increased cell death, cell-cycle dysregulation, and ultimately reduced cell growth [60, 61]. These developmental changes are probably the hallmarks of congenital syndrome. In fact, these observations may explain the increase in the number of congenital syndrome cases reported in ZIKV-infected countries in the Americas and Cape Verde; more than 16 countries have reported cases of ZIKV-related congenital syndrome (Fig. 5). In Brazil alone, 1911 cases have been confirmed with 371 neonatal deaths reported; in Colombia and Cape Verde, 40 and 14 cases, respectively, of ZIKV-related congenital syndrome have also been reported (numbers correct as of September) [31, 62]. Additionally, cases of ZIKV-related miscarriage have been reported in other countries [51, 59]. These devastating effects have prompted many countries to advise pregnant women to avoid visiting regions with ZIKV outbreaks (most recently, Florida in the US, and Singapore and Thailand in Asia) [42, 63]. Overall, ZIKV infection seems to have the highest morbidity in newborn infants.
Fig. 5
Countries and territories with cases of ZIKV-related congenital syndrome (microcephaly or congenital central nervous system anomalies). ZIKV-related congenital syndrome cases have been reported in Brazil, Colombia, Cape Verde, Martinique, Panama, El Salvador, Paraguay, French Guiana, Puerto Rico, Canada, the United States, Costa Rica, Guatemala, Honduras, Dominican Republic, Haiti, and Suriname
Challenges associated with ZIKV infections
i)
There are no vaccines to protect against ZIKV infection or drugs to treat infected patients. The majority of ZIKV-infected patients recover from the infection and do not need treatment. However, as mentioned above, transmission of the virus from pregnant women to fetuses affects normal fetal neurological development. As such, women, especially those who plan to become pregnant, need to be immunized. Thus, there is an urgent need to develop a vaccine to stop the spread of ZIKV infection, especially from pregnant women to fetuses. The development of an effective ZIKV vaccine will be challenging for the following reasons:
a)
although 9 % of anti-dengue virus monoclonal antibodies can cross-neutralize ZIKV infections, the majority of anti-dengue virus antibodies are not neutralizing [64]; instead, they enhance ZIKV and dengue viral infections, a condition known as antibody-dependent enhancement of infection [65, 66]. This observation may complicate ZIKV infections in countries where ZIKV and dengue viral infections co-circulate, and especially in countries (Brazil, Mexico, El Salvador, and the Philippines) where a dengue virus vaccine (Dengvaxia) has been licensed; the effect of the vaccine on ZIKV infection needs to be evaluated. With this in mind, an ideal ZIKV vaccine should not enhance dengue infection and vice versa.
b)
a candidate ZIKV vaccine should protect against all ZIKV strains. Additionally, it should lack the propensity to be accidentally transmitted by mosquitoes from vaccinees to the unvaccinated population. This will be a big challenge for an attenuated ZIKV vaccine, given that the virus is transmitted by mosquito bites.
c)
the vaccine should be safe in pregnant women in order to avoid complications during pregnancy.
d)
an effective ZIKV vaccine should elicit a systemic immune response in addition to genital immunity, given that the virus can also be transmitted sexually. A ZIKV vaccine that cannot protect against sexual transmission may not be highly valuable.
ii)
ZIKV is transmitted by some of the same Aedes mosquitoes (e.g. Aedes aegypti) that transmit DENVs, YFV, and Chikungunya virus (an alphavirus) [18, 67]. To aggravate the situation, the symptoms of ZIKV infection (fever, skin rash, joint pains, and headache) are similar to those caused by these three viruses. As such, most ZIKV infections are clinically misdiagnosed as DENV infections.
iii)
Serological tests for ZIKV targeting the envelope glycoprotein domains EI and EII are not specific; they cross-react with other flaviviruses such as DENVs and YFV [1, 4, 65, 68]. These domain-cross-reactive antibodies can lead ZIKV infections to be misdiagnosed as dengue and vice versa.
iv)
ZIKV RNA/viral particles have been isolated or detected in the nasopharynx [69], saliva [49, 70, 71], and breast milk [49]. Nevertheless, it is not known whether the virus can be transmitted through saliva, nasal secretions, or breast milk. Moreover, it is unknown whether anti-ZIKV antibodies are present in these secretions. Studies are needed to assess whether the virus can be transmitted through these routes and whether anti-ZIKV antibodies are present in body secretions such as saliva and breast milk.
v)
Although ZIKV RNA and viral particles have been isolated from saliva, non-invasive methods using saliva to diagnose ZIKV are lacking. As such, there is a need for rapid diagnostic kits for detecting ZIKV infection using saliva.
vi)
Although we know that ZIKV can be transmitted sexually from male to female, male to male, and female to male, there have been no reports of transmission between female couples. Studies are required to assess transmission between female sex partners.
vii)
The recent isolation of ZIKV from Culex quinquefasciatus mosquitoes [7] may also make the control of ZIKV infection challenging. Culex quinquefasciatus, just like Aedes mosquitoes, is a domestic mosquito that breeds in standing water; it is widespread in the Americas (except Canada), Africa, Asia, the United Kingdom, and the Pacific Islands [72], and feeds on humans and domestic animals, including birds [73]. Thus, measures to control Aedes mosquitoes also have to take the control of Culex quinquefasciatus mosquitoes into consideration.
viii)
It is unknown why some patients (65 %) with anti-ZIKV antibodies do not develop GBS whereas others do. Does the genetic make-up of an individual predispose that individual to ZIKV-associated GBS or microcephaly?
ZIKV infection is a major public health problem that has already spread to many countries around the globe, and is likely to spread to more, given that the virus can be transmitted to humans sexually and by mosquitoes. ZIKV imported by asymptomatic travelers will likely be transmitted to sexual partners, thus increasing the number of infected people and, consequently, the availability of ZIKV-infected blood meals for naïve mosquitoes. A blood meal from an infected patient, following a mosquito bite, will be all it takes to establish an outbreak in countries with imported cases, as has been demonstrated in the United States. This puts China, Singapore, Thailand, and Europe, which all have Aedes aegypti and Aedes albopictus mosquitoes, at high risk for mosquito transmission, given that ZIKV has already been imported to these regions [21, 22, 37]. An infected mosquito in any of these regions can cross from one country to another, irrespective of country borders, thus increasing the spread of the virus. It is also likely that people moving across country borders will spread the virus to other countries.
We now know, unlike in the 1950s, that ZIKV can be transmitted not only through mosquito bites but also sexually, via blood transfusion, and from mother to fetus. We also know that the virus is associated with symptoms such as joint pains and skin rash, and that it causes neurological problems such as GBS, microcephaly, and an abnormally developed congenital central nervous system. Nevertheless, there is still a lot we do not know about ZIKV; we do not know whether the virus can be transmitted through nasal secretions, saliva, or breast milk. There are also unanswered questions as to why the outbreak in Brazil spread to almost all of the Americas and why this outbreak had higher mortality and morbidity than sporadic outbreaks prior to 2007. With these considerations in mind, there is an urgent need to develop vaccines and therapeutics to prevent or stop the spread of ZIKV infections. In addition, there is an urgent need to develop robust diagnostic tests that can detect and discriminate ZIKV infections from other flaviviruses (DENVs, Chikungunya, and YFV) 5 days after the onset of symptoms. Fortunately, recent studies have shown that antibodies targeting NS1 proteins are ZIKV-specific and can be used to develop ZIKV-specific diagnostic kits [65]. If an antigen diagnostic test targeting ZIKV NS1 is successfully developed, it can be used to diagnose ZIKV infections from the onset of viremia. In fact, a ZIKV antigen diagnostic test will be more valuable because it can detect ZIKV infections prior to the appearance of anti-ZIKV antibodies in serum. Such a test can be used alongside a specific molecular test, such as reverse transcription (RT) PCR, to confirm ZIKV infection. Until then, specific detection of ZIKV infections has to rely on RT-PCR.
We would like to thank Mr. Lukai Zhai (Michigan Technological University) and Dr. Kathryn Frietze (University of New Mexico School of Medicine) for reading the review and for their critical comments and suggestions.
Start-up fund from Michigan Technological University.
Availability of data and material
Not applicable.
Authors’ contributions
RB and ET wrote the review. RB and ET generated the figures. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Not applicable.
Ethics approval and consent to participate
Not applicable.
Authors’ Affiliations
Department of Biological Sciences, Michigan Technological University
1. Lanciotti RS, Kosoy OL, Laven JJ, Velez JO, Lambert AJ, Johnson AJ, Stanfield SM, Duffy MR. Genetic and serologic properties of Zika virus associated with an epidemic, Yap State, Micronesia, 2007. Emerg Infect Dis. 2008;14(8):1232–9.
3. Baronti C, Piorkowski G, Charrel RN, Boubis L, Leparc-Goffart I, de Lamballerie X. Complete coding sequence of zika virus from a French polynesia outbreak in 2013. Genome Announc. 2014;2(3):1–2.
4. Fonseca K, Meatherall B, Zarra D, Drebot M, MacDonald J, Pabbaraju K, Wong S, Webster P, Lindsay R, Tellier R. First case of Zika virus infection in a returning Canadian traveler. Am J Trop Med Hyg. 2014;91(5):1035–8.
5. Dick GW, Kitchen SF, Haddow AJ. Zika virus. I. Isolations and serological specificity. Trans R Soc Trop Med Hyg. 1952;46(5):509–20.
6. Hayes EB. Zika virus outside Africa. Emerg Infect Dis. 2009;15(9):1347–50.
7. Fiocruz identifies Culex in Recife with the potential to transmit the virus zika []. Accessed 2 Aug 2016.
8. Monlun E, Zeller H, Le Guenno B, Traore-Lamizana M, Hervy JP, Adam F, Ferrara L, Fontenille D, Sylla R, Mondo M, et al. [Surveillance of the circulation of arbovirus of medical interest in the region of eastern Senegal]. Bull Soc Pathol Exot. 1993;86(1):21–8.
9. Saluzzo JF, Ivanoff B, Languillat G, Georges AJ. [Serological survey for arbovirus antibodies in the human and simian populations of the South-East of Gabon (author’s transl)]. Bull Soc Pathol Exot Filiales. 1982;75(3):262–6.
10. Darwish MA, Hoogstraal H, Roberts TJ, Ahmed IP, Omar F. A sero-epidemiological survey for certain arboviruses (Togaviridae) in Pakistan. Trans R Soc Trop Med Hyg. 1983;77(4):442–5.
11. Haddow AD, Schuh AJ, Yasuda CY, Kasper MR, Heang V, Huy R, Guzman H, Tesh RB, Weaver SC. Genetic characterization of Zika virus strains: geographic expansion of the Asian lineage. PLoS Negl Trop Dis. 2012;6(2):e1477.
12. Smithburn KC. Neutralizing antibodies against certain recently isolated viruses in the sera of human beings residing in East Africa. J Immunol. 1952;69(2):223–34.
13. Smithburn KC, Kerr JA, Gatne PB. Neutralizing antibodies against certain viruses in the sera of residents of India. J Immunol. 1954;72(4):248–57.
14. Duffy MR, Chen TH, Hancock WT, Powers AM, Kool JL, Lanciotti RS, Pretrick M, Marfel M, Holzbauer S, Dubray C, et al. Zika virus outbreak on Yap Island, Federated States of Micronesia. N Engl J Med. 2009;360(24):2536–43.
15. Cao-Lormeau VM, Roche C, Teissier A, Robin E, Berry AL, Mallet HP, Sall AA, Musso D. Zika virus, French polynesia, South pacific, 2013. Emerg Infect Dis. 2014;20(6):1085–6.
16. Zika virus infection outbreak, French Polynesia []. Accessed 2 Aug 2016.
17. Roth A, Mercier A, Lepers C, Hoy D, Duituturaga S, Benyon E, Guillaumot L, Souares Y. Concurrent outbreaks of dengue, chikungunya and Zika virus infections - an unprecedented epidemic wave of mosquito-borne viruses in the Pacific 2012-2014. Euro Surveill. 2014;19(41):1–8.
18. Dupont-Rouzeyrol M, O’Connor O, Calvez E, Daures M, John M, Grangeon JP, Gourinat AC. Co-infection with Zika and dengue viruses in 2 patients, New Caledonia, 2014. Emerg Infect Dis. 2015;21(2):381–2.
19. Tognarelli J, Ulloa S, Villagra E, Lagos J, Aguayo C, Fasce R, Parra B, Mora J, Becerra N, Lagos N, et al. A report on the outbreak of Zika virus on Easter Island, South Pacific, 2014. Arch Virol. 2015;161:665–668.
20. Campos GS, Bandeira AC, Sardi SI. Zika virus outbreak, Bahia, Brazil. Emerg Infect Dis. 2015;21(10):1885–6.
21. Randolph SE, Rogers DJ. The arrival, establishment and spread of exotic diseases: patterns and predictions. Nat Rev Microbiol. 2010;8(5):361–71.
22. Campbell LP, Luther C, Moo-Llanes D, Ramsey JM, Danis-Lozano R, Peterson AT. Climate change influences on global distributions of dengue and chikungunya virus vectors. Philos Trans R Soc London Ser B Biol Sci. 2015;370(1665):1–9.
23. ZIKA SITUATION REPORT []. Accessed 2 Aug 2016.
24. Zika virus ‘spreading explosively,’ WHO leader says []. Accessed 17 July 2016.
25. PAHO Statement on Zika Virus Transmission and Prevention []. Accessed 18 July 2016.
26. Yakob L, Walker T. Zika virus outbreak in the Americas: the need for novel mosquito control methods. Lancet Glob Health. 2016;4(3):e148–9.View ArticlePubMedGoogle Scholar
27. Pacheco O, Beltran M, Nelson CA, Valencia D, Tolosa N, Farr SL, Padilla AV, Tong VT, Cuevas EL, Espinosa-Bode A, et al. Zika Virus Disease in Colombia - Preliminary Report. N Engl J Med. 2016. [Epub ahead of print].
28. Zika - Epidemiological Update 8 September 2016 []. Accessed 20 Sept 2016.
29. Department of Health Daily Zika Update []. Accessed 20 Sept 2016.
30. Case Counts in the US []. Accessed 20 Sept 2016.
31. Zika cases and congenital syndrome associated with Zika virus reported by countries and territories in the Americas, 2015 - 2016 Cumulative cases []. Accessed 20 Sept 2016.
32. WHO confirms Zika virus strain imported from the Americas to Cabo Verde []. Accessed 17 July 2016.
33. Surveillance of Zika virus [ - s1]. Accessed 20 Sept 2016.
34. Zika virus disease epidemic [ virus-Americas, Caribbean, Oceania.pdf]. Accessed 20 Sept 2016.
35. China reports 22nd imported Zika virus case []. Accessed 17 July 2016.
36. Sixth Case of Zika Virus Recorded in Russia []. Accessed 20 Sept 2016.
37. About 200 Zika cases recorded in Thailand: ministry []. Accessed 20 Sept 2016.
38. Zika virus []. Accessed 17 July 2016.
39. Zika virus - notifications of Zika virus infection (Zika) []. Accessed 20 Sept 2016.
40. Shinohara K, Kutsuna S, Takasaki T, Moi ML, Ikeda M, Kotaki A, Yamamoto K, Fujiya Y, Mawatari M, Takeshita N, et al. Zika fever imported from Thailand to Japan, and diagnosed by PCR in the urines. J Travel Med. 2016;23(1):1–3.
41. Gourinat AC, O’Connor O, Calvez E, Goarant C, Dupont-Rouzeyrol M. Detection of Zika virus in urine. Emerg Infect Dis. 2015;21(1):84–6.
42. CDC Adds New Zika Warning for Pregnant Women and Their Sex Partners []. Accessed 17 July 2016.
43. Foy BD, Kobylinski KC, Chilson Foy JL, Blitvich BJ, Travassos da Rosa A, Haddow AD, Lanciotti RS, Tesh RB. Probable non-vector-borne transmission of Zika virus, Colorado, USA. Emerg Infect Dis. 2011;17(5):880–2.
44. Frank C, Cadar D, Schlaphof A, Neddersen N, Gunther S, Schmidt-Chanasit J, Tappe D. Sexual transmission of Zika virus in Germany, April 2016. Euro Surveill. 2016;21(23):1–4.
45. Freour T, Mirallie S, Hubert B, Splingart C, Barriere P, Maquart M, Leparc-Goffart I. Sexual transmission of Zika virus in an entirely asymptomatic couple returning from a Zika epidemic area, France, April 2016. Euro Surveill. 2016;21(23):1–3.
46. Musso D, Roche C, Robin E, Nhan T, Teissier A, Cao-Lormeau VM. Potential sexual transmission of Zika virus. Emerg Infect Dis. 2015;21(2):359–61.
47. Deckard DT, Chung WM, Brooks JT, Smith JC, Woldai S, Hennessey M, Kwit N, Mead P. Male-to-male sexual transmission of Zika Virus - Texas, January 2016. MMWR Morb Mortal Wkly Rep. 2016;65(14):372–4.
48. Mlakar J, Korva M, Tul N, Popovic M, Poljsak-Prijatelj M, Mraz J, Kolenc M, Resman Rus K, Vesnaver Vipotnik T, Fabjan Vodusek V, et al. Zika Virus Associated with Microcephaly. N Engl J Med. 2016;374(10):951–8.
49. Besnard M, Lastere S, Teissier A, Cao-Lormeau V, Musso D. Evidence of perinatal transmission of Zika virus, French Polynesia, December 2013 and February 2014. Euro Surveill. 2014;19(13):1–4.
50. Calvet G, Aguiar RS, Melo AS, Sampaio SA, de Filippis I, Fabri A, Araujo ES, de Sequeira PC, de Mendonca MC, de Oliveira L, et al. Detection and sequencing of Zika virus from amniotic fluid of fetuses with microcephaly in Brazil: a case study. Lancet Infect Dis. 2016;16(6):653–60.
51. Martines RB, Bhatnagar J, Keating MK, Silva-Flannery L, Muehlenbachs A, Gary J, Goldsmith C, Hale G, Ritter J, Rollin D, et al. Notes from the field: evidence of Zika Virus infection in brain and placental tissues from two congenitally infected newborns and two fetal losses--Brazil, 2015. MMWR Morb Mortal Wkly Rep. 2016;65(6):159–60.
52. Brazil reports Zika infection from blood transfusions []. Accessed 18 July 2016.
53. Musso D, Nhan T, Robin E, Roche C, Bierlaire D, Zisou K, Shan Yan A, Cao-Lormeau VM, Broult J. Potential for Zika virus transmission through blood transfusion demonstrated during an outbreak in French Polynesia, November 2013 to February 2014. Euro Surveill. 2014;19(14):1–3.
54. Lustig Y, Mendelson E, Paran N, Melamed S, Schwartz E. Detection of Zika virus RNA in whole blood of imported Zika virus disease cases up to 2 months after symptom onset, Israel, December 2015 to April 2016. Euro Surveill. 2016;21(26):1–4.
55. WHO. Zika virus infection and Zika fever: Frequently asked questions []. Accessed 17 July 2016.
56. WHO. Neurological Syndrome, congenital malformations, and Zika virus infection. Implications for public health in the Americas []. Accessed 17 July 2016.
57. Cao-Lormeau V-M, Blake A, Mons S, Lastère S, Roche C, Vanhomwegen J, Dub T, Baudouin L, Teissier A, Larre P, et al. Guillain-Barré Syndrome outbreak associated with Zika virus infection in French Polynesia: a case-control study. Lancet. 2016;387(10027):1531–9.
58. Rasmussen SA, Jamieson DJ, Honein MA, Petersen LR. Zika Virus and birth defects--reviewing the evidence for causality. N Engl J Med. 2016;374(20):1981–7.
59. van der Eijk AA, van Genderen PJ, Verdijk RM, Reusken CB, Mogling R, van Kampen JJ, Widagdo W, Aron GI, GeurtsvanKessel CH, Pas SD, et al. Miscarriage associated with Zika Virus infection. N Engl J Med. 2016;375(10):1002–4.
60. Garcez PP, Loiola EC, Madeiro da Costa R, Higa LM, Trindade P, Delvecchio R, Nascimento JM, Brindeiro R, Tanuri A, Rehen SK. Zika virus impairs growth in human neurospheres and brain organoids. Science. 2016;352(6287):816–8.
61. Tang H, Hammack C, Ogden SC, Wen Z, Qian X, Li Y, Yao B, Shin J, Zhang F, Lee EM, et al. Zika Virus infects human cortical neural progenitors and attenuates their growth. Cell Stem Cell. 2016;18(5):587–90.
62. Zika - Epidemiological Update 29 July 2016. []. Accessed 2 Aug 2016.
63. Zika virus infection: Global Update []. Accessed 2 Aug 2016.
64. Swanstrom JA, Plante JA, Plante KS, Young EF, McGowan E, Gallichotte EN, Widman DG, Heise MT, de Silva AM, Baric RS. Dengue Virus Envelope Dimer Epitope Monoclonal Antibodies Isolated from Dengue Patients Are Protective against Zika Virus. MBio. 2016;7(4):1–8.
65. Stettler K, Beltramello M, Espinosa DA, Graham V, Cassotta A, Bianchi S, Vanzetta F, Minola A, Jaconi S, Mele F, et al. Specificity, cross-reactivity and function of antibodies elicited by Zika virus infection. Science. 2016;353(6301):823–6.
66. Dejnirattisai W, Supasa P, Wongwiwat W, Rouvinski A, Barba-Spaeth G, Duangchinda T, Sakuntabhai A, Cao-Lormeau VM, Malasit P, Rey FA, et al. Dengue virus sero-cross-reactivity drives antibody-dependent enhancement of infection with zika virus. Nat Immunol. 2016;17(9):1102–8.
67. Caron M, Paupy C, Grard G, Becquart P, Mombo I, Nso BB, Kassa Kassa F, Nkoghe D, Leroy EM. Recent introduction and rapid dissemination of Chikungunya virus and Dengue virus serotype 2 associated with human and mosquito coinfections in Gabon, central Africa. Clin Infect Dis. 2012;55(6):e45–53.
68. Fokam EB, Levai LD, Guzman H, Amelia PA, Titanji VP, Tesh RB, Weaver SC. Silent circulation of arboviruses in Cameroon. East Afr Med J. 2010;87(6):262–8.
69. Leung GH, Baird RW, Druce J, Anstey NM. Zika Virus infection in Australia following a monkey bite in Indonesia. Southeast Asian J Trop Med Public Health. 2015;46(3):460–4.
70. Musso D, Roche C, Nhan TX, Robin E, Teissier A, Cao-Lormeau VM. Detection of Zika virus in saliva. J Clin Virol. 2015;68:53–5.
71. Bonaldo MC, Ribeiro IP, Lima NS, Dos Santos AA, Menezes LS, da Cruz SO, de Mello IS, Furtado ND, de Moura EE, Damasceno L, et al. Isolation of infective Zika Virus from urine and saliva of patients in Brazil. PLoS Negl Trop Dis. 2016;10(6):e0004816.
72. Culex quinquefasciatus (Say) southern house or brown mosquito. 2008 [ quinquefasciatus new profile Feb 08.pdf]. Accessed 17 July 2016.
73. Garcia-Rejon JE, Blitvich BJ, Farfan-Ale JA, Lorono-Pino MA, Chi Chim WA, Flores-Flores LF, Rosado-Paredes E, Baak-Baak C, Perez-Mutul J, Suarez-Solis V, et al. Host-feeding preference of the mosquito, Culex quinquefasciatus, in Yucatan State, Mexico. J Insect Sci. 2010;10:32.
© The Author(s). 2016
|
/**
* @file reflect_loss_beckmann.h
* Models ocean surface reflection loss using Beckmann-Spizzichino model.
*/
#pragma once
#include <usml/ocean/reflect_loss_model.h>
#include <usml/ocean/wave_height_pierson.h>
namespace usml {
namespace ocean {
/// @ingroup boundaries
/// @{
/**
* Models ocean surface reflection loss using Beckmann-Spizzichino model.
* Jones et al. have shown that this model can be broken into high and low
* frequency components. The high frequency component is given by:
* \f[
* RL_{high} = -20 \: log_{10} \left( \sqrt{1-v_3} \right)
* \f]\f[
* v_3 = max \left( \frac{1}{2} sin \theta, \left[ 1 -
* \frac{ exp(-a \theta^2 / 4 ) }{ \sqrt{ \pi a \theta^2 } } \right] sin \theta
* \right)
* \f]
* where
* \f$ a = \frac{1}{ 2 ( 0.003 + 5.1 \times 10^{-3} w ) } \f$,
* \f$ w \f$ = wind speed (m/sec), and
* \f$ v_3 \f$ is limited to a 0.99 value.
* Note that the high frequency component is frequency independent.
* The low frequency component is given by:
* \f[
* RL_{low} = -20 \: log_{10}
* \left( 0.3 + \frac{0.7}{1+6.0 \times 10^{-11} w^4 f^2 } \right)
* \f]
* where
* \f$ f \f$ = signal frequency (Hz).
* Note that the low frequency component is grazing angle independent.
* The total reflection loss is the sum of these two terms in dB.
*
* @xref Adrian D. Jones, Janice Sendt, Alec J. Duncan, Paul A. Clarke and
* Amos Maggi, "Modelling the acoustic reflection loss at the rough
* ocean surface," Proceedings of ACOUSTICS 2009, Australian Acoustical Society,
* 23-25 November 2009, Adelaide, Australia.
*/
class USML_DECLSPEC reflect_loss_beckmann: public reflect_loss_model {
public:
/**
* Initializes ocean surface reflection loss using the
* Beckmann-Spizzichino model.
*
* @param wind_speed Wind speed used to develop rough seas (m/s).
*/
reflect_loss_beckmann( double wind_speed ) :
_wind_speed( wind_speed )
{
}
/**
* Computes the broadband reflection loss and phase change.
*
* @param location Location at which to compute reflection loss.
* @param frequencies Frequencies over which to compute loss. (Hz).
* @param angle Grazing angle relative to the interface (radians).
* @param amplitude Change in ray intensity in dB (output).
* @param phase Change in ray phase in radians (output).
* Hard-coded to a value of PI for this model.
* Phase change not computed if this is NULL.
*/
virtual void reflect_loss(const wposition1& location,
const seq_vector& frequencies, double angle,
vector<double>* amplitude, vector<double>* phase = NULL) ;
private:
/** Wind speed (m/sec). */
const double _wind_speed;
};
/// @}
} // end of namespace ocean
} // end of namespace usml
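As a rough illustration of how the two documented terms combine, the following standalone sketch evaluates the total loss in dB directly from the formulas in the class comment. It is not the USML implementation; the function name and sample inputs are invented for illustration, and no clamping beyond the documented 0.99 limit on v3 is attempted.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the Beckmann-Spizzichino terms documented above (illustration
// only, not the USML code). Inputs: wind speed (m/s), grazing angle (rad),
// signal frequency (Hz). Returns total reflection loss in dB.
double beckmann_loss_db(double wind_speed, double angle, double frequency) {
    constexpr double pi = 3.141592653589793;

    // High-frequency term: RL_high = -20 log10( sqrt(1 - v3) ),
    // with a = 1 / ( 2 (0.003 + 5.1e-3 w) ) and v3 limited to 0.99.
    const double a = 1.0 / (2.0 * (0.003 + 5.1e-3 * wind_speed));
    double v3 = std::max(0.5 * std::sin(angle),
        (1.0 - std::exp(-a * angle * angle / 4.0)
             / std::sqrt(pi * a * angle * angle)) * std::sin(angle));
    v3 = std::min(v3, 0.99);
    const double rl_high = -20.0 * std::log10(std::sqrt(1.0 - v3));

    // Low-frequency term: grazing-angle independent.
    const double rl_low = -20.0 * std::log10(
        0.3 + 0.7 / (1.0 + 6.0e-11 * std::pow(wind_speed, 4.0)
                         * frequency * frequency));

    // Total reflection loss is the sum of the two terms in dB.
    return rl_high + rl_low;
}
```

For a 10 m/s wind, a 10 degree grazing angle, and a 1 kHz signal this evaluates to roughly 3 dB, most of which comes from the low-frequency term.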
|
#include <iostream>
#include <fstream>
#include <vector>
#include <algorithm>
#include <stdlib.h>
#include "bloom.h"
#include "binary_io.h"
using namespace std;
int main(int argc, char *argv[])
{
try{
if(argc != 2){
cerr << "Usage: " << argv[0] << " <binary metadata file>" << endl;
return EXIT_FAILURE;
}
ifstream fin(argv[1], ios::binary);
if(!fin){
cerr << "Unable to open metadata file: " << argv[1] << endl;
return EXIT_FAILURE;
}
FilterInfo info;
size_t num_info;
binary_read(fin, num_info);
cout << "Metadata file contains " << num_info << " FilterInfo objects" << endl;
for(size_t i = 0;i < num_info;++i){
binary_read(fin, info);
if(info.run_accession == INVALID_ACCESSION){
cout << "Invalid run accession" << endl;
}
else{
cout << accession_to_str(info.run_accession) << endl;
}
cout << "\tspots : " << info.number_of_spots << endl;
cout << "\tbases : " << info.number_of_bases << endl;
cout << "\tdate_received : " << info.date_received << endl;
if(info.experiment_accession == INVALID_ACCESSION){
cout << "\texperiment_accession : Invalid" << endl;
}
else{
cout << "\texperiment_accession : " << accession_to_str(info.experiment_accession) << endl;
}
cout << "\texperiment_title : " << info.experiment_title << endl;
cout << "\texperiment_design_description : " << info.experiment_design_description << endl;
cout << "\texperiment_library_name : " << info.experiment_library_name << endl;
cout << "\texperiment_library_strategy : " << info.experiment_library_strategy << endl;
cout << "\texperiment_library_source : " << info.experiment_library_source << endl;
cout << "\texperiment_library_selection : " << info.experiment_library_selection << endl;
cout << "\texperiment_instrument_model : " << info.experiment_instrument_model << endl;
if(info.sample_accession == INVALID_ACCESSION){
cout << "\tsample_accession : Invalid" << endl;
}
else{
cout << "\tsample_accession : " << accession_to_str(info.sample_accession) << endl;
}
cout << "\tsample_taxa : " << info.sample_taxa << endl;
if( !info.sample_attributes.empty() ){
cout << "\tsample_attributes :" << endl;
for(MAP<string, string>::const_iterator j = info.sample_attributes.begin();
j != info.sample_attributes.end();++j){
cout << "\t\t" << j->first << " : " << j->second << endl;
}
}
if(info.study_accession == INVALID_ACCESSION){
cout << "\tstudy_accession : Invalid" << endl;
}
else{
cout << "\tstudy_accession : " << accession_to_str(info.study_accession) << endl;
}
cout << "\tstudy_title : " << info.study_title << endl;
cout << "\tstudy_abstract : " << info.study_abstract << endl;
}
}
catch(const char *error){
cerr << "Caught the error: " << error << endl;
return EXIT_FAILURE;
}
catch(...){
cerr << "Caught an unhandled error!" << endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
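The loop above pulls each field out with binary_read() from binary_io.h. For the trivially copyable fields (the leading size_t record count, the accession numbers) that helper can be sketched as below; this is an assumption about binary_io.h, which presumably also provides overloads for std::string and the MAP attribute container. The const char* throw matches the catch clause in main:

```cpp
#include <fstream>

// Hypothetical minimal binary_read for trivially copyable types
// (e.g. the size_t record count read first in main). Throws a
// const char* on short reads, matching main's catch clause.
template <typename T>
void binary_read(std::ifstream& fin, T& value) {
    fin.read(reinterpret_cast<char*>(&value), sizeof(T));
    if (!fin) {
        throw "binary_read: truncated metadata stream";
    }
}
```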
|
#ifndef HeavyFlavorAnalysis_SpecificDecay_BPHBuToJPsiKBuilder_h
#define HeavyFlavorAnalysis_SpecificDecay_BPHBuToJPsiKBuilder_h
/** \class BPHBuToJPsiKBuilder
*
* Description:
* Class to build B+- to JPsi K+- candidates
*
* \author Paolo Ronchese INFN Padova
*
*/
//----------------------
// Base Class Headers --
//----------------------
#include "HeavyFlavorAnalysis/SpecificDecay/interface/BPHDecayToResTrkBuilder.h"
//------------------------------------
// Collaborating Class Declarations --
//------------------------------------
#include "HeavyFlavorAnalysis/RecoDecay/interface/BPHRecoBuilder.h"
#include "HeavyFlavorAnalysis/RecoDecay/interface/BPHRecoCandidate.h"
#include "HeavyFlavorAnalysis/RecoDecay/interface/BPHPlusMinusCandidate.h"
#include "HeavyFlavorAnalysis/SpecificDecay/interface/BPHParticleMasses.h"
#include "FWCore/Framework/interface/Event.h"
//---------------
// C++ Headers --
//---------------
#include <string>
#include <vector>
// ---------------------
// -- Class Interface --
// ---------------------
class BPHBuToJPsiKBuilder : public BPHDecayToResTrkBuilder {
public:
/** Constructor
*/
BPHBuToJPsiKBuilder(const edm::EventSetup& es,
const std::vector<BPHPlusMinusConstCandPtr>& jpsiCollection,
const BPHRecoBuilder::BPHGenericCollection* kaonCollection)
: BPHDecayToResTrkBuilder(es,
"JPsi",
BPHParticleMasses::jPsiMass,
BPHParticleMasses::jPsiMWidth,
jpsiCollection,
"Kaon",
BPHParticleMasses::kaonMass,
BPHParticleMasses::kaonMSigma,
kaonCollection) {
setResMassRange(2.80, 3.40);
setTrkPtMin(0.7);
setTrkEtaMax(10.0);
setMassRange(3.50, 8.00);
setProbMin(0.02);
setMassFitRange(5.00, 6.00);
setConstr(true);
}
// deleted copy constructor and assignment operator
BPHBuToJPsiKBuilder(const BPHBuToJPsiKBuilder& x) = delete;
BPHBuToJPsiKBuilder& operator=(const BPHBuToJPsiKBuilder& x) = delete;
/** Destructor
*/
~BPHBuToJPsiKBuilder() override {}
/** Operations
*/
/// set cuts
void setKPtMin(double pt) { setTrkPtMin(pt); }
void setKEtaMax(double eta) { setTrkEtaMax(eta); }
void setJPsiMassMin(double m) { setResMassMin(m); }
void setJPsiMassMax(double m) { setResMassMax(m); }
/// get current cuts
double getKPtMin() const { return getTrkPtMin(); }
double getKEtaMax() const { return getTrkEtaMax(); }
double getJPsiMassMin() const { return getResMassMin(); }
double getJPsiMassMax() const { return getResMassMax(); }
};
#endif
|
// -*- tab-width: 4; -*-
// vi: set sw=2 ts=4 expandtab:
// Copyright 2010-2020 The Khronos Group Inc.
// SPDX-License-Identifier: Apache-2.0
//!
//! @internal
//! @~English
//! @file
//!
//! @brief Create Images from PNG format files.
//!
//! @author Mark Callow, HI Corporation.
//! @author Jacob Ström, Ericsson AB.
//!
#include "stdafx.h"
#include <sstream>
#include <stdexcept>
#include "image.hpp"
#include "lodepng.h"
#include <KHR/khr_df.h>
#include "dfd.h"
void warning(const char *pFmt, ...);
Image*
Image::CreateFromPNG(FILE* src, bool transformOETF, Image::rescale_e rescale)
{
// Unfortunately lodepng doesn't believe in stdio, plus
// the function we need only reads from memory. To avoid
// a potentially unnecessary read of the whole file check the
// signature ourselves.
uint8_t pngsig[8] = {
0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a
};
uint8_t filesig[sizeof(pngsig)];
if (fseek(src, 0L, SEEK_SET) < 0) {
std::stringstream message;
message << "Could not seek. " << strerror(errno);
throw std::runtime_error(message.str());
}
if (fread(filesig, sizeof(pngsig), 1, src) != 1) {
std::stringstream message;
message << "Could not read. " << strerror(errno);
throw std::runtime_error(message.str());
}
if (memcmp(filesig, pngsig, sizeof(pngsig))) {
throw Image::different_format();
}
// It's a PNG file.
// Find out the size.
size_t fsz;
fseek(src, 0L, SEEK_END);
fsz = ftell(src);
fseek(src, 0L, SEEK_SET);
// Slurp it into memory so we can use lodepng_inspect, to determine
// the data type, and lodepng_chunk_find.
std::vector<uint8_t> png;
png.resize(fsz);
if (fread(png.data(), 1L, png.size(), src) != png.size()) {
if (feof(src)) {
throw std::runtime_error("Unexpected end of file.");
} else {
std::stringstream message;
message << "Could not read. " << strerror(ferror(src));
throw std::runtime_error(message.str());
}
}
lodepng::State state;
unsigned int lodepngError;
uint32_t componentCount, componentBits;
uint32_t w, h;
// Find out the color type. As lodepng_inspect only reads the IHDR chunk,
// we must also check for presence of a tRNS chunk as it affects the
// target color type. This is so we can request the exact type we need
// when decoding. What a palaver! Sigh! However this is probably faster
// than telling the decoder to give us RGBA and potentially touching every
// pixel to extract only what we need.
lodepngError = lodepng_inspect(&w, &h, &state, png.data(), png.size());
if (lodepngError) {
std::stringstream message;
message << "PNG inspect error: " << lodepng_error_text(lodepngError)
<< ".";
throw std::runtime_error(message.str());
}
// Tell the decoder we want the same color type as the file. Exceptions
// to this are made later.
state.info_raw = state.info_png.color;
// Is there a tRNS chunk?
const unsigned char *pTrnsChunk = nullptr;
const unsigned char* pFirstChunk = &png.data()[33]; // 1st after header
pTrnsChunk = lodepng_chunk_find_const(pFirstChunk, &png.back(), "tRNS");
switch (state.info_png.color.colortype) {
case LCT_GREY:
componentCount = 1;
// TODO: Create 4-bit color type and rescale 1- & 2-bpp to that.
rescale = Image::eAlwaysRescaleTo8Bits;
break;
case LCT_RGB:
if (pTrnsChunk != nullptr) {
state.info_raw.colortype = LCT_RGBA;
componentCount = 4;
} else {
state.info_raw.colortype = LCT_RGB;
componentCount = 3;
}
break;
case LCT_PALETTE:
if (pTrnsChunk) {
state.info_raw.colortype = LCT_RGBA;
componentCount = 4;
} else {
state.info_raw.colortype = LCT_RGB;
componentCount = 3;
}
state.info_raw.bitdepth = 8; // Palette values are 8 bit RGBA
warning("Expanding %d-bit paletted image to %s",
state.info_png.color.bitdepth,
state.info_raw.colortype == LCT_RGBA ? "R8G8B8A8" : "R8G8B8");
break;
case LCT_GREY_ALPHA:
componentCount = 2;
break;
case LCT_RGBA:
componentCount = 4;
break;
default:
// To avoid potentially uninitialized variable warning.
componentCount = 0;
}
if (rescale == eAlwaysRescaleTo8Bits
|| (rescale == eRescaleTo8BitsIfLess
&& state.info_png.color.bitdepth < 8)) {
state.info_raw.bitdepth = 8;
if (state.info_png.color.bitdepth != 8) {
warning("Rescaling %d-bit image to 8 bits.",
state.info_png.color.bitdepth);
}
componentBits = 8;
} else {
componentBits = state.info_png.color.bitdepth;
}
uint8_t* imageData;
lodepngError = lodepng_decode(&imageData, &w, &h, &state,
png.data(), png.size());
if (imageData && !lodepngError) {
(void)lodepng_get_raw_size(w, h, &state.info_raw);
} else {
free(imageData);
std::stringstream message;
message << "PNG decode error. " << lodepng_error_text(lodepngError)
<< ".";
throw std::runtime_error(message.str());
}
Image* image = nullptr;
if (componentBits == 16 ) {
switch (componentCount) {
case 1: {
image = new r16image(w, h, (r16color*)imageData);
break;
} case 2: {
image = new rg16image(w, h, (rg16color*)imageData);
break;
} case 3: {
image = new rgb16image(w, h, (rgb16color*)imageData);
break;
} case 4: {
image = new rgba16image(w, h, (rgba16color*)imageData);
break;
}
}
} else {
switch (componentCount) {
case 1: {
image = new r8image(w, h, (r8color*)imageData);
break;
} case 2: {
image = new rg8image(w, h, (rg8color*)imageData);
break;
} case 3: {
image = new rgb8image(w, h, (rgb8color*)imageData);
break;
} case 4: {
image = new rgba8image(w, h, (rgba8color*)imageData);
break;
}
}
}
switch (componentCount) {
case 1:
image->colortype = Image::eLuminance; // Defined in PNG spec.
break;
case 2:
image->colortype = Image::eLuminanceAlpha; // ditto
break;
case 3:
image->colortype = Image::eRGB;
break;
case 4:
image->colortype = Image::eRGBA;
break;
}
// state will have been updated with the rest of the file info.
// Here is the priority of the color space info in PNG:
//
// 1. No color-info chunks: assume sRGB default or 2.2 gamma
// (up to the implementation).
// 2. sRGB chunk: use sRGB intent specified in the chunk, ignore
// all other color space information.
// 3. iCCP chunk: use the provided ICC profile, ignore gamma and
// primaries.
// 4. gAMA and/or cHRM chunks: use provided gamma and primaries.
//
// A PNG image could signal linear transfer function with one
// of these two options:
//
// 1. Provide an ICC profile in iCCP chunk.
// 2. Use a gAMA chunk with a value that yields linear
// function (100000).
//
// Using no. 1 above or setting transfer func & primaries from
// the ICC profile would require parsing the ICC payload.
if (state.info_png.srgb_defined) {
// intent is a matter for the user when a color transform
// is needed during rendering, especially when gamut
// mapping. It does not affect the meaning or value of the
// image pixels so there is nothing to do here.
image->setOetf(KHR_DF_TRANSFER_SRGB);
} else if (state.info_png.iccp_defined) {
delete image;
throw std::runtime_error("PNG file has an ICC profile chunk. "
"These are not supported");
} else if (state.info_png.gama_defined) {
if (state.info_png.gama_gamma == 100000)
image->setOetf(KHR_DF_TRANSFER_LINEAR);
else if (state.info_png.gama_gamma == 45455)
image->setOetf(KHR_DF_TRANSFER_SRGB);
else {
if (state.info_png.gama_gamma == 0) {
delete image;
throw std::runtime_error("PNG file has gAMA of 0.");
}
if (transformOETF) {
// What PNG calls gamma is the power to use for encoding.
// Elsewhere gamma is commonly used for the power to use for
// decoding. For example by spec. the value in the PNG file is
// gamma * 100000 so gamma of 45455 is .45455. The power for
// decoding is the inverse, i.e 1 / .45455 which is 2.2.
// The variable gamma below is for decoding and is 1 / gAMA.
float gamma = (float) 100000 / state.info_png.gama_gamma;
// 1.6667 is a very arbitrary cutoff.
if (componentBits == 8 && gamma > 1.6667f) {
image->transformOETF(decode_gamma, encode_sRGB, gamma);
image->setOetf(KHR_DF_TRANSFER_SRGB);
if (gamma > 3.3333f) {
warning("Transformed PNG image with gamma of %f to sRGB"
" gamma (~2.2)", gamma);
}
} else {
image->transformOETF(decode_gamma, encode_linear, gamma);
image->setOetf(KHR_DF_TRANSFER_LINEAR);
if (gamma > 1.3) {
warning("Transformed PNG image with gamma of %f to"
" linear", gamma);
}
}
} else {
// User is overriding color space info from file.
image->setOetf(KHR_DF_TRANSFER_UNSPECIFIED);
return image;
}
}
} else {
image->setOetf(KHR_DF_TRANSFER_SRGB);
}
if (state.info_png.chrm_defined
&& !state.info_png.srgb_defined && !state.info_png.iccp_defined) {
Primaries primaries;
primaries.Rx = (float)state.info_png.chrm_red_x / 100000;
primaries.Ry = (float)state.info_png.chrm_red_y / 100000;
primaries.Gx = (float)state.info_png.chrm_green_x / 100000;
primaries.Gy = (float)state.info_png.chrm_green_y / 100000;
primaries.Bx = (float)state.info_png.chrm_blue_x / 100000;
primaries.By = (float)state.info_png.chrm_blue_y / 100000;
primaries.Wx = (float)state.info_png.chrm_white_x / 100000;
primaries.Wy = (float)state.info_png.chrm_white_y / 100000;
image->setPrimaries(findMapping(&primaries, 0.002f));
}
return image;
}
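The gAMA arithmetic in the function above can be isolated: PNG stores the encoding exponent scaled by 100000, and the decoding exponent is its reciprocal. A standalone sketch of that conversion (illustrative only; not the decode_gamma/encode_sRGB functors used by transformOETF):

```cpp
#include <cmath>

// PNG's gAMA chunk holds (encoding gamma * 100000). The power that
// decodes a sample back to linear light is the reciprocal, e.g.
// gAMA = 45455 -> encoded with 0.45455, decoded with ~2.2.
double png_decoding_gamma(unsigned gama) {
    return 100000.0 / static_cast<double>(gama);
}

// Decode one normalized sample (0..1) back to linear light.
double decode_sample(double encoded, unsigned gama) {
    return std::pow(encoded, png_decoding_gamma(gama));
}
```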
|
/**
* Clever programming language
* Copyright (c) Clever Team
*
* This file is distributed under the MIT license. See LICENSE for details.
*/
#ifndef CLEVER_MODMANAGER_H
#define CLEVER_MODMANAGER_H
#ifdef CLEVER_MSVC
#include <unordered_map>
#else
#include <tr1/unordered_map>
#endif
#include <vector>
#include "core/module.h"
#include "core/ast.h"
namespace clever {
class Value;
class Environment;
class Driver;
/// Package manager
class ModManager {
public:
enum ImportKind {
NONE = 0,
TYPE = 1<<0,
FUNCTION = 1<<1,
ALL = TYPE | FUNCTION,
NAMESPACE = 1<<2
};
ModManager(Driver* driver)
: m_driver(driver), m_user(NULL) {}
~ModManager() {}
/// Initialization routine
void init();
/// Shutdown routine
void shutdown();
void setIncludePath(const std::string& path) { m_include_path = path; }
Module* getUserModule() const { return m_user; }
/// Adds a new package to the map
void addModule(const std::string&, Module*);
/// Imports the module to the current scope
ast::Node* importModule(Scope*, const std::string&,
size_t = ModManager::ALL, const CString* = NULL) const;
ast::Node* importFile(Scope*, const std::string&,
size_t = ModManager::ALL, const CString* = NULL) const;
void loadVar(Scope*, const CString*, Value*) const;
void loadModule(Scope*, Module*, size_t, const CString*) const;
void loadModuleContent(Scope*, Module*, size_t, const CString*, const std::string&) const;
void loadFunction(Scope*, const std::string&, Function*) const;
void loadType(Scope*, const std::string&, Type*) const;
private:
Driver* m_driver;
ModuleMap m_mods;
Module* m_user;
std::string m_include_path;
};
} // clever
#endif // CLEVER_MODMANAGER_H
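importModule() and importFile() receive the ImportKind flags as a plain size_t bitmask (defaulting to ModManager::ALL). A standalone sketch of how such a mask is tested; the enum mirrors the one above, while the helper name and filtering logic are assumptions:

```cpp
#include <cstddef>

// Mirrors ModManager::ImportKind. ALL covers types and functions but
// deliberately not NAMESPACE, which is an independent bit.
enum ImportKind : std::size_t {
    NONE      = 0,
    TYPE      = 1 << 0,
    FUNCTION  = 1 << 1,
    ALL       = TYPE | FUNCTION,
    NAMESPACE = 1 << 2
};

// Assumed helper: does this import request ask for functions?
bool wantsFunctions(std::size_t kind) {
    return (kind & FUNCTION) != 0;
}
```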
|
/*********************************************************\
* File: SRPDeferredAmbient.cpp *
*
* Copyright (C) 2002-2013 The PixelLight Team (http://www.pixellight.org/)
*
* This file is part of PixelLight.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy of this software
* and associated documentation files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all copies or
* substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
* BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
\*********************************************************/
//[-------------------------------------------------------]
//[ Includes ]
//[-------------------------------------------------------]
#include <PLRenderer/RendererContext.h>
#include <PLRenderer/Renderer/Program.h>
#include <PLRenderer/Renderer/ProgramUniform.h>
#include <PLRenderer/Renderer/ProgramAttribute.h>
#include <PLRenderer/Renderer/TextureBufferRectangle.h>
#include <PLRenderer/Effect/EffectManager.h>
#include "PLCompositing/FullscreenQuad.h"
#include "PLCompositing/Shaders/Deferred/SRPDeferredGBuffer.h"
#include "PLCompositing/Shaders/Deferred/SRPDeferredAmbient.h"
//[-------------------------------------------------------]
//[ Namespace ]
//[-------------------------------------------------------]
using namespace PLCore;
using namespace PLRenderer;
using namespace PLScene;
namespace PLCompositing {
//[-------------------------------------------------------]
//[ RTTI interface ]
//[-------------------------------------------------------]
pl_class_metadata(SRPDeferredAmbient, "PLCompositing", PLCompositing::SRPDeferred, "Scene renderer pass for deferred rendering ambient")
// Constructors
pl_constructor_0_metadata(DefaultConstructor, "Default constructor", "")
// Attributes
pl_attribute_metadata(ShaderLanguage, PLCore::String, "", ReadWrite, "Shader language to use (for example \"GLSL\" or \"Cg\"), if empty string, the default shader language of the renderer will be used", "")
pl_attribute_metadata(AmbientColor, PLGraphics::Color3, PLGraphics::Color3(0.2f, 0.2f, 0.2f), ReadWrite, "Ambient color", "")
// Overwritten PLScene::SceneRendererPass attributes
pl_attribute_metadata(Flags, pl_flag_type_def3(SRPDeferredAmbient, EFlags), 0, ReadWrite, "Flags", "")
pl_class_metadata_end(SRPDeferredAmbient)
//[-------------------------------------------------------]
//[ Public functions ]
//[-------------------------------------------------------]
/**
* @brief
* Default constructor
*/
SRPDeferredAmbient::SRPDeferredAmbient() :
ShaderLanguage(this),
AmbientColor(this),
Flags(this),
m_pProgramGenerator(nullptr)
{
}
/**
* @brief
* Destructor
*/
SRPDeferredAmbient::~SRPDeferredAmbient()
{
// Destroy the program generator
if (m_pProgramGenerator)
delete m_pProgramGenerator;
}
//[-------------------------------------------------------]
//[ Private virtual PLScene::SceneRendererPass functions ]
//[-------------------------------------------------------]
void SRPDeferredAmbient::Draw(Renderer &cRenderer, const SQCull &cCullQuery)
{
// Get the instance of the "PLCompositing::SRPDeferredGBuffer" scene renderer pass
SRPDeferredGBuffer *pSRPDeferredGBuffer = GetGBuffer();
if (pSRPDeferredGBuffer) {
// Get the fullscreen quad instance
FullscreenQuad *pFullscreenQuad = pSRPDeferredGBuffer->GetFullscreenQuad();
if (pFullscreenQuad) {
// Get the vertex buffer of the fullscreen quad
VertexBuffer *pVertexBuffer = pFullscreenQuad->GetVertexBuffer();
if (pVertexBuffer) {
// Get the texture buffer to use
TextureBufferRectangle *pTextureBuffer = pSRPDeferredGBuffer->GetRenderTargetTextureBuffer(0);
if (pTextureBuffer) {
// Get the shader language to use
String sShaderLanguage = ShaderLanguage;
if (!sShaderLanguage.GetLength())
sShaderLanguage = cRenderer.GetDefaultShaderLanguage();
// Create the program generator if there's currently no instance of it
if (!m_pProgramGenerator || m_pProgramGenerator->GetShaderLanguage() != sShaderLanguage) {
// If there's an previous instance of the program generator, destroy it first
if (m_pProgramGenerator) {
delete m_pProgramGenerator;
m_pProgramGenerator = nullptr;
}
// Choose the shader source codes depending on the requested shader language
if (sShaderLanguage == "GLSL") {
#include "SRPDeferredAmbient_GLSL.h"
m_pProgramGenerator = new ProgramGenerator(cRenderer, sShaderLanguage, sDeferredAmbient_GLSL_VS, "110", sDeferredAmbient_GLSL_FS, "110"); // OpenGL 2.0 ("#version 110")
} else if (sShaderLanguage == "Cg") {
#include "SRPDeferredAmbient_Cg.h"
m_pProgramGenerator = new ProgramGenerator(cRenderer, sShaderLanguage, sDeferredAmbient_Cg_VS, "arbvp1", sDeferredAmbient_Cg_FS, "arbfp1");
}
}
// If there's no program generator, we don't need to continue
if (m_pProgramGenerator) {
// Reset all render states to default
cRenderer.GetRendererContext().GetEffectManager().Use();
// Use stencil buffer?
if (!(GetFlags() & NoStencil)) {
// Enable stencil test - ignore pixels tagged with 1 within the stencil buffer
cRenderer.SetRenderState(RenderState::StencilEnable, true);
cRenderer.SetRenderState(RenderState::StencilRef, 1);
cRenderer.SetRenderState(RenderState::StencilFunc, Compare::NotEqual);
}
// Reset the program flags
m_cProgramFlags.Reset();
// Albedo used?
if (!(GetFlags() & NoAlbedo))
PL_ADD_FS_FLAG(m_cProgramFlags, FS_ALBEDO)
// Ambient occlusion used?
if (!(GetFlags() & NoAmbientOcclusion))
PL_ADD_FS_FLAG(m_cProgramFlags, FS_AMBIENTOCCLUSION)
// Self illumination used?
if (pSRPDeferredGBuffer->IsColorTarget3Used() && !(GetFlags() & NoSelfIllumination))
PL_ADD_FS_FLAG(m_cProgramFlags, FS_SELFILLUMINATION)
// Get a program instance from the program generator using the given program flags
ProgramGenerator::GeneratedProgram *pGeneratedProgram = m_pProgramGenerator->GetProgram(m_cProgramFlags);
// Make our program to the current one
if (pGeneratedProgram && cRenderer.SetProgram(pGeneratedProgram->pProgram)) {
// Set pointers to uniforms & attributes of a generated program if they are not set yet
GeneratedProgramUserData *pGeneratedProgramUserData = static_cast<GeneratedProgramUserData*>(pGeneratedProgram->pUserData);
if (!pGeneratedProgramUserData) {
pGeneratedProgram->pUserData = pGeneratedProgramUserData = new GeneratedProgramUserData;
Program *pProgram = pGeneratedProgram->pProgram;
// Vertex shader attributes
static const String sVertexPosition = "VertexPosition";
pGeneratedProgramUserData->pVertexPosition = pProgram->GetAttribute(sVertexPosition);
// Vertex shader uniforms
static const String sTextureSize = "TextureSize";
pGeneratedProgramUserData->pTextureSize = pProgram->GetUniform(sTextureSize);
// Fragment shader uniforms
static const String sAmbientColor = "AmbientColor";
pGeneratedProgramUserData->pAmbientColor = pProgram->GetUniform(sAmbientColor);
static const String sAlbedoMap = "AlbedoMap";
pGeneratedProgramUserData->pAlbedoMap = pProgram->GetUniform(sAlbedoMap);
static const String sSelfIlluminationMap = "SelfIlluminationMap";
pGeneratedProgramUserData->pSelfIlluminationMap = pProgram->GetUniform(sSelfIlluminationMap);
}
// Set program vertex attributes, this creates a connection between "Vertex Buffer Attribute" and "Vertex Shader Attribute"
if (pGeneratedProgramUserData->pVertexPosition)
pGeneratedProgramUserData->pVertexPosition->Set(pVertexBuffer, PLRenderer::VertexBuffer::Position);
// Set texture size
if (pGeneratedProgramUserData->pTextureSize)
pGeneratedProgramUserData->pTextureSize->Set(pTextureBuffer->GetSize());
// Ambient color
if (pGeneratedProgramUserData->pAmbientColor)
pGeneratedProgramUserData->pAmbientColor->Set(AmbientColor.Get());
// Albedo map
if (pGeneratedProgramUserData->pAlbedoMap) {
const int nTextureUnit = pGeneratedProgramUserData->pAlbedoMap->Set(pTextureBuffer);
if (nTextureUnit >= 0) {
cRenderer.SetSamplerState(nTextureUnit, Sampler::AddressU, TextureAddressing::Clamp);
cRenderer.SetSamplerState(nTextureUnit, Sampler::AddressV, TextureAddressing::Clamp);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MagFilter, TextureFiltering::None);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MinFilter, TextureFiltering::None);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MipFilter, TextureFiltering::None);
}
}
// Self illumination map
if (pGeneratedProgramUserData->pSelfIlluminationMap) {
const int nTextureUnit = pGeneratedProgramUserData->pSelfIlluminationMap->Set(pSRPDeferredGBuffer->GetRenderTargetTextureBuffer(3));
if (nTextureUnit >= 0) {
cRenderer.SetSamplerState(nTextureUnit, Sampler::AddressU, TextureAddressing::Clamp);
cRenderer.SetSamplerState(nTextureUnit, Sampler::AddressV, TextureAddressing::Clamp);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MagFilter, TextureFiltering::None);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MinFilter, TextureFiltering::None);
cRenderer.SetSamplerState(nTextureUnit, Sampler::MipFilter, TextureFiltering::None);
}
}
// Draw the fullscreen quad
cRenderer.SetRenderState(RenderState::ScissorTestEnable, false);
pFullscreenQuad->Draw(true);
}
}
}
}
}
}
}
//[-------------------------------------------------------]
//[ Namespace ]
//[-------------------------------------------------------]
} // PLCompositing
|
You've got family at Ancestry.
Find more Hartoebben relatives and grow your tree by exploring billions of historical records. Taken every decade since 1790, the U.S. Federal Census can tell you a lot about your family. For example, from 1930 to 1940 there were 2 fewer people named Hartoebben in the United States — and some of them are likely related to you.
Start a tree and connect with your family.
Create, build, and explore your family tree.
What if you had a window into the history of your family? With historical records, you do. From home life to career, records help bring your relatives' experiences into focus. There were 13 people named Hartoebben in the 1930 U.S. Census. In 1940, there were 15% fewer people named Hartoebben in the United States. What was life like for them?
Picture the past for your ancestors.
In 1940, 11 people named Hartoebben were living in the United States. In a snapshot:
• 18% were children
• For 6 females, Helen was the most common name
• 1 owned their home, valued at $3,200
• 91% rented out rooms to boarders
Learn where they came from and where they went.
As Hartoebben families continued to grow, they left more tracks on the map:
• They most commonly lived in Missouri
• 1 (8%) was a first-generation American
• 18% migrated within the United States from 1935 to 1940
|
#ifndef SETTINGSCOMBMAINDIALOG_HPP
#define SETTINGSCOMBMAINDIALOG_HPP
#pragma once
#include <memory>
#include "UDRollDownDialogs.hpp"
namespace UD {
class SettingsCombHeaderTabPage;
class SettingsCombListTabPage;
class ISettingsCombCurrTabPage;
class SettingsCombControl;
class SettingsCombDialogState;
}
namespace UD {
class UD_DLL_EXPORT SettingsCombMainDialog : public UD::RollDownModalDialog {
public:
class UD_DLL_EXPORT ISettingsCombDialogEnvironment : public DG::PanelObserver {
public:
virtual ~ISettingsCombDialogEnvironment ();
virtual GS::UniString GetDialogTitle () const = 0;
virtual USize GetTabPageCount () const { return 1; }
virtual bool IsHeaderPanelVisible () const { return true; }
virtual GS::Array<UD::ISettingsCombCurrTabPage*> CreateSettingsCombCurrTabPage (const GS::Array<UD::IRollPanel*>& rollPanelArray, std::shared_ptr<UD::SettingsCombControl> settingsCombControl) = 0;
};
class UD_DLL_EXPORT ICustomBottomPanelCreator {
public:
virtual ~ICustomBottomPanelCreator ();
virtual void CreateBottomPanel (UD::RollDownModalDialog& rollDownDialog) = 0;
};
protected:
GS::Array<GS::Ref<GS::Object>>& setsCombRefArr;
GS::Ref<ISettingsCombDialogEnvironment> dialogEnvironment;
std::shared_ptr<SettingsCombControl> settingsCombControl;
public:
SettingsCombMainDialog (GS::Ref<ISettingsCombDialogEnvironment> dialogEnvironment,
GS::Array<GS::Ref<GS::Object> >& setsCombRefArr,
std::shared_ptr<SettingsCombControl> settingsCombControl,
const SettingsCombDialogState* dialogState,
const GS::Guid& guid = GS::NULLGuid);
SettingsCombMainDialog (GS::Ref<ISettingsCombDialogEnvironment> dialogEnvironment,
GS::Array<GS::Ref<GS::Object> >& setsCombRefArr,
std::shared_ptr<SettingsCombControl> settingsCombControl,
const SettingsCombDialogState* dialogState,
GS::Ref <ICustomBottomPanelCreator> customBottomPanelCreator,
const GS::Guid& guid = GS::NULLGuid);
~SettingsCombMainDialog ();
SettingsCombDialogState GetDialogState () const;
private:
void InitCustomBottomPanel (GS::Ref <ICustomBottomPanelCreator> customBottomPanelCreator);
void InitMainTabPages (GS::Ref<ISettingsCombDialogEnvironment> settingsCombDialogEnvironment,
const SettingsCombDialogState* dialogState);
};
} //namespace UD
#endif
|
#ifndef ANALYSISGRAPH_H
#define ANALYSISGRAPH_H
#include <cstddef>
#include <vector>
class BayesRRmz;
class AnalysisGraph
{
public:
AnalysisGraph(BayesRRmz *bayes, size_t maxParallel = 12);
virtual ~AnalysisGraph();
virtual void exec(unsigned int numInds,
unsigned int numSnps,
const std::vector<unsigned int> &markerIndices) = 0;
protected:
BayesRRmz *m_bayes = nullptr;
size_t m_maxParallel = 12;
};
#endif // ANALYSISGRAPH_H
|
Resistant Myzus persicae aphid threatens virus yellows control
COLD weather and widespread use of neonicotinoid seed treatments combined to limit virus yellows infection in the sugar beet crop in 2011 to the extent that just 0.5 per cent of the crop was affected.
Concern, however, is increasing over the developing risk of resistance to neonicotinoid insecticides used to control Myzus persicae aphids, the virus yellows vector.
Resistance has been identified in France, Italy and Spain, in Myzus persicae populations associated mainly with peach and nectarine trees, where topical applications of neonicotinoids are used for aphid control, said Dr Mark Stevens.
“If those aphids start moving north and adapt to colder climes then I worry because there is very little chemistry coming on stream to control them,” he said.
Rothamsted Research work had shown these aphids were also carrying MACE and kdr resistance mechanisms, conferring resistance to pirimicarb and pyrethroids. “We have little in the can to actually control them,” said Dr Stevens.
“There are no resistant varieties coming on stream in the immediate future and the problem we are faced with is we are not looking at one virus but a complex of up to four different viruses, making it a very difficult target to hit with a resistance gene,” he said.
He told sugar beet growers at the conference the situation was ‘one we need to keep an eye on’.
|
Catholic and Protestant reformers in the 16th century occasionally spoke scornfully of Anabaptists as "new monks," referring to Anabaptist insistence on holy living and intense spiritual life (e.g., TA Elsaß 1, 110-13). Anabaptists occasionally accepted the comparison (Klassen, William and Walter Klaassen, eds. and trans. The Writings of Pilgram Marpeck, Classics of the Radical Reformation, vol. 2. Scottdale, PA: Herald Press, 1978: 217) but more frequently rejected it (Klassen & Klaassen: 215-16; Menno, Writings, 369, 401), in part because monks often came from the socially privileged classes. Several scholars have used monastic history as an aid to interpret Anabaptism (Troeltsch, Ritschl, Davis, Snyder, Martin). Many Anabaptists and Mennonites, beginning with the Hutterite chronicle, pointed to quasi-monastic sectarian medieval movements, especially Waldenses, as forerunners of Anabaptism (these theories are promulgated or discussed by Keller, Gratz, Verduin, Durnbaugh). One of the most extensive efforts to relate monasticism and Anabaptism drew on both monastic and quasi-monastic traditions (Davis). Most scholars have carefully limited their interpretations to pointing out "intellectual parallels" or general similarities; some have argued for direct continuity and influence.
The crucial interpretive question revolves around the nature of monasticism: is it a nonconforming sectarian development critical of the institutional church (Workman) or an intensified institutional core of the ecclesiastical establishment? Or, did monasticism begin as a charismatic, lay, "sectarian" movement in the 4th century but become fully integrated into the sacramental, ecclesial, institutional church by the early Middle Ages (Rousseau, Martin)? How central the critical, separatist aspect of early monasticism is to monastic identity is disputed, even by those within the monastic community (Eoin de Bhaldraithe). Particularly significant in this regard is the distinction between contemplative monastic orders (Benedictine, Cistercian, Carthusian) and more lay-oriented, urban mendicant orders and houses of regular canons of the late Middle Ages (Franciscans, Dominicans, Augustinian Friars, Praemonstratensians, Augustinian Canons). The latter orders were associated with the middle class and were visibly and pastorally active; the former were often but not always associated with the nobility and lived in secluded and rural areas. Most Anabaptist links to "monks" appear to have been with the mendicants and canons regular. Michael Sattler is the main exception to this generalization.
Most interpreters agree that Anabaptists rejected the sacramental and institutional "culture-church" of the Middle Ages in favor of a voluntary, non-institutionalized, even anti-clerical church of the faithful few, in effect, reducing the church to a devout "monastic" core. At issue among scholars is whether the label "monastic" should properly be applied to a sectarian, pure church vision such as that held by Anabaptists, since most monks did not believe that the church was made up solely of monastics, rather, they believed that monks and nuns were part, perhaps the most important part, of the church. The qualities and virtues prized by Anabaptists and Mennonites (hospitality, humility, community, Gelassenheit, obedience, repentance, nonresistance, etc.) were also prime monastic virtues, although all medieval Catholics were exhorted to practice these same virtues.
Significant parallels to monastic spirituality are found in the Mennonite period of post-Anabaptist history in which Anabaptist first-generation identity was transformed into a sacramental, ecclesial, institutional, cultural (ethnic) faith, even though Mennonites, Amish, and Hutterites avoided the language of sacramental and institutional Christianity (Cronk, Martin). During the 1980s growing Mennonite concern about the role of single adults in the church has not yet taken account of the traditional Christian monastic theology, with its implications for both marriage and singleness. Recent scholarship on monasticism emphasizes the social role of celibate communities, which enhanced the role of marriage while creating a sphere of activity for those remaining unmarried (Brown, Leclercq). Further research is needed in all these areas of Anabaptist and Mennonite history and culture.
Cronk, Sandra. "Gelassenheit: The Rites of the Redemptive Process in the Old Order Amish and Old Order Mennonite Communities." PhD dissertation, U. of Chicago, 1977. See also Mennonite Quarterly Review 65 (1981): 5-44.
For Ritschl, Gratz, Verduin, Keller, and others: see Davis, Kenneth R. Anabaptism and Asceticism: A Study in Intellectual Origins. Scottdale, 1974: 27-31.
de Bhaldraithe, Eoin. "Michael Sattler, Benedictine and Anabaptist." Downside Review 105 (April 1987): 111-131.
Durnbaugh, Donald F. "Theories of Free Church Origins." Mennonite Quarterly Review 41 (1968): 83-95.
Martin, Dennis D. "Monks, Mendicants and Anabaptists: Michael Sattler and the Benedictines Reconsidered." Mennonite Quarterly Review 60 (1986): 139-64. Reply by Snyder, C. Arnold. "Michael Sattler, Benedictine: Dennis Martin's Objections Reconsidered." Mennonite Quarterly Review 61 (1987): 251-79.
Martin, Dennis D. "Catholic Spirituality and Mennonite Discipleship." Mennonite Quarterly Review 62 (1988): 5-25.
Martin, Dennis D. "Nothing New under the Sun? Mennonites and History." Conrad Grebel Review 5 (1987): 1-27.
Snyder, C. Arnold. "The Monastic Origins of Swiss Anabaptist Sectarianism." Mennonite Quarterly Review 57 (1983): 5-26.
Snyder, C. Arnold. The Life and Thought of Michael Sattler. Scottdale, PA: Herald Press, 1984.
Troeltsch, Ernst. The Social Teachings of the Christian Churches. Translator: Olive Wyon. New York: Harper and Row, 1960: 239-46, 332-33.
For general information on monastic history, see:
Brown, Peter R. L. "The Notion of Virginity in the Early Church." Christian Spirituality: Origins to the 12th C. Editors: Bernard McGinn and John Meyendorff. New York: Crossroad (1985): 427-43.
Gründler, Otto. "Devotio Moderna." Christian Spirituality: High Middle Ages and Reformation. Editor: Jill Raitt. New York: Crossroad (1987): 176-93.
Knowles, David. Christian Monasticism. New York: McGraw-Hill, 1969.
Leclercq, Jean. Monks and Love in 12th-C. France. Oxford: Clarendon, 1979.
Novak, Michael. "The Free Churches and the Roman Church." Journal of Ecumenical Studies, 2 (1965): 426-47.
Rousseau, Phillip. Ascetics, Authority, and the Church in the Age of Jerome and Cassian. New York: Oxford, 1980.
Workman, Herbert B. The Evolution of the Monastic Ideal from the Earliest Times to the Coming of the Friars. 2nd edition. London, 1927, reprinted with introduction by David Knowles. Boston: Beacon, 1962.
Adapted by permission of Herald Press, Harrisonburg, Virginia, and Waterloo, Ontario, from Mennonite Encyclopedia, Vol. 5, pp. 601-602. All rights reserved. For information on ordering the encyclopedia visit the Herald Press website.
©1996-2013 by the Global Anabaptist Mennonite Encyclopedia Online. All rights reserved.
MLA style: Martin, Dennis D. "Monasticism." Global Anabaptist Mennonite Encyclopedia Online. 1987. Web. 18 May 2013. http://www.gameo.org/encyclopedia/contents/M653.html.
APA style: Martin, Dennis D. (1987). Monasticism. Global Anabaptist Mennonite Encyclopedia Online. Retrieved 18 May 2013, from http://www.gameo.org/encyclopedia/contents/M653.html.
|
Solar system with Earth-size planet found
Astronomers studying a star 127 light-years away have discovered an intriguing solar system with at least five Neptune-class planets and possibly two more, including an Earth-size world.
After six years of painstaking observations, astronomers have identified a distant solar system with at least five Neptune-class worlds orbiting within 130 million miles or so of the parent star--closer than Mars is to the sun. Two other planets are believed to be present, including one just 1.4 times as massive as Earth.
The presumed Earth-size planet orbits a scant 2 million miles from its star, completing a full orbit, or "year," every 1.18 days. If confirmed with additional observations, this hellish world would be the smallest yet discovered, additional proof that Earth-size planets are falling within the reach of current Earth-based instruments.
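The quoted orbit and period can be cross-checked with Kepler's third law. The following sketch is our own back-of-envelope arithmetic, not the astronomers' analysis; it assumes HD 10180 has roughly one solar mass, which is only approximately true for a "sun-like" star.

```cpp
#include <cmath>

// Rough consistency check: Kepler's third law, a^3 = G*M*T^2 / (4*pi^2),
// relates the reported 1.18-day "year" to the quoted ~2-million-mile orbit,
// assuming a star of roughly one solar mass.
double semiMajorAxisMiles(double periodDays) {
    const double GM_sun = 1.32712440018e20;  // m^3/s^2, solar gravitational parameter
    const double pi = 3.14159265358979323846;
    const double T = periodDays * 86400.0;   // orbital period in seconds
    const double a = std::cbrt(GM_sun * T * T / (4.0 * pi * pi));
    return a / 1609.344;                     // metres -> statute miles
}
```

For a 1.18-day period this comes out near 2.0 million miles, consistent with the figure in the article.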
An artist's impression of a distant solar system with up to seven planets, including a world just slightly bigger than Earth. European Southern Observatory
"We have probably found the system with the most planets known today, coming close to the solar system," Christophe Lovis of the University of Geneva, lead author of a paper reporting the discovery, told CNET in an e-mail exchange. "This means that we are now able to detect very complex systems of low-mass planets, which will help us a lot [in] understanding their diversity. This is a step towards answering long-standing questions, such as, how common are habitable planets in the universe?"
As for the presumed Earth-size planet, Lovis said "it is probable that such a low-mass body cannot retain an atmosphere so close to its star. Most likely, this body is like a big melted-lava ball. Hard to imagine, since this is unknown in our solar system."
Over six years, Lovis and his colleagues used a sensitive spectrograph mounted on the European Southern Observatory's 3.6-meter (11.8-foot) telescope at La Silla, Chile, to measure subtle changes in the light from a sun-like star known as HD 10180 in the southern constellation Hydrus.
Located 127 light-years from Earth, HD 10180 wobbles ever so slightly, as it is tugged this way and that by the gravity of a retinue of unseen planets. Over the course of 190 observations, astronomers were able to confirm the presence of at least five Neptune-like planets between 13 and 25 times as massive as Earth.
All five worlds orbit HD 10180 at distances ranging from 0.06 to 1.4 times the distance between the Earth and the sun, out to about 130 million miles. The much smaller, yet-to-be-confirmed planet orbits inside the five Neptune-class worlds. A seventh Saturn-class planet is believed to be at a range of 3.4 times the Earth-sun distance, taking six Earth years to complete one orbit.
According to the Extrasolar Planets Encyclopaedia maintained by the Paris Observatory, 488 planets beyond Earth's solar system have been discovered to date. Some 15 solar systems feature at least three planets. A star known as 55 Cancri has five confirmed planets, including two Jupiter-class worlds.
The HD 10180 solar system is unique in that its planets circle the parent star in nearly circular orbits and seem to be positioned according to a relatively simple arithmetic rule that may be "a consequence of the various gravitational interactions that occur between the planets during their evolution," Lovis said.
"It is difficult to say at this point how significant this result is, but it will be very interesting to hear what our theoretician colleagues think of it," he added.
Surprisingly, perhaps, it appears the HD 10180 solar system is gravitationally stable over long time scales, despite the effects of five Neptune-class planets orbiting so close to their star.
"This was not an easy question, and answering it required in-depth dynamical analyses," Lovis said. "When modeling all major effects properly (including effects of general relativity), it turns out that the system is indeed stable over long time scales."
He said additional observations will be needed to pin down the orbit and mass of the innermost, Earth-class planet.
"We will dedicate some more telescope nights to improve the coverage of the 1.18-day period," he said of the smaller planet. "At the moment, we are suffering from the fact that we take one single data point per night, which makes it difficult to be sure about a 1.18-day period. I expect that we will make progress on this system within a year or so."
The observations are extremely difficult. The gravitational tug of the low-mass planet amounts to a 1.8 mph wobble in a star 127 light-years away, "which is hard to measure and, if confirmed, would represent a new record in precision," Lovis said.
|
/*
* Copyright 2018 Frangou Lab
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "BedFile.hpp"
#include "../sam/SamRecord.hpp"
namespace gene {
BedFile::BedFile(const std::string& path,
const std::unique_ptr<CommandLineFlags>& flags,
OpenMode mode)
: AlignmentFile(path, FileType::Bed, flags, mode)
{
switch (mode) {
case OpenMode::Read:
readHeader();
break;
case OpenMode::Write:
writeHeader();
break;
}
}
void BedFile::readHeader()
{
    // Standard BED files carry no header, so there is nothing to consume.
}

void BedFile::writeHeader()
{
    // Write the column names tab-separated so the header lines up with
    // the tab-delimited records emitted by write() below. (The original
    // code wrote commas here, which did not match the record format.)
    out_file_->Write("chrom");
    out_file_->Write('\t');
    out_file_->Write("chromStart");
    out_file_->Write('\t');
    out_file_->Write("chromEnd");
    out_file_->Write('\t');
    out_file_->Write("name");
    out_file_->WriteLine();
}
void BedFile::write(const SamRecord& record)
{
out_file_->Write(record.chrom);
out_file_->Write('\t');
out_file_->Write(std::to_string(record.chromStart));
out_file_->Write('\t');
out_file_->Write(std::to_string(record.chromEnd));
out_file_->Write('\t');
out_file_->Write(record.name);
out_file_->WriteLine();
}
SamRecord BedFile::read()
{
std::string line = in_file_->ReadLine();
return SamRecord(line);
}
int64_t BedFile::length() const
{
return IOFile::length();
}
int64_t BedFile::position() const
{
return IOFile::position();
}
bool BedFile::isValidAlignmentFile() const
{
return true;
}
std::string BedFile::strFileType() const
{
return "bed";
}
std::string BedFile::defaultExtension()
{
return "bed";
}
std::vector<std::string> BedFile::extensions()
{
return {"bed"};
}
} // namespace gene
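As a quick illustration of the record layout that `BedFile::write()` above produces, here is a self-contained sketch. The `Record` struct and `toBedLine` name are ours; the real `SamRecord` and `IOFile` classes are defined elsewhere in the project.

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Minimal stand-in for the SamRecord fields used by BedFile::write().
struct Record {
    std::string chrom;
    int64_t chromStart;
    int64_t chromEnd;
    std::string name;
};

// Format one BED line the same way BedFile::write() does: four
// tab-separated columns, no trailing tab before the line break.
std::string toBedLine(const Record& r) {
    std::ostringstream out;
    out << r.chrom << '\t' << r.chromStart << '\t'
        << r.chromEnd << '\t' << r.name;
    return out.str();
}
```

So `toBedLine({"chr1", 100, 200, "feature1"})` yields the single line `chr1<TAB>100<TAB>200<TAB>feature1`, matching the four-column BED layout the class reads and writes.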
|
US 4884575 A
A cardiac pacemaker pulse generator is adapted to generate electrical stimuli at a first pacing rate, and to selectively increase the rate to a second, higher pacing rate. A timer triggers the rate increase to establish the higher rate as an exercise rate following the passage of a preset period of time after the timer is enabled. An external magnet controlled by the patient activates a reed switch to enable the timer to commence timing. The pulse generator is further adapted to respond to a second pass of the magnet over the reed switch after enabling of the timer to thereupon disable the timer before the preset period of time has expired. If the second pass of the magnet occurs after the exercise rate has begun, the element for increasing the rate is disabled to return the pulse generator to the lower pacing rate. The change in pacing rates is made in steps.
1. In combination with an implantable cardiac pacemaker for delivering electrical stimuli to the heart of a patient to pace the heart rate,
said pacemaker comprising:
pulse generator means for selectively producing said electrical stimuli at a fixed resting rate and at a higher exercise rate,
lead means associated with said pulse generator for delivering said stimuli to a selected chamber of the heart, and
timer means for stepping-up said pulse generator means from said resting rate to said exercise rate after an adjustable preset delay following activation of said timer means, said preset delay being of a duration perceptible by the patient; and
external control means for patient initiation of a first command to said pacemaker to activate said timer means.
2. In combination with an implantable cardiac pacemaker for delivering electrical stimuli to the heart of a patient to pace the heart rate, said pacemaker comprising:
pulse generator means for selectively producing said electrical stimuli at a fixed resting rate and a higher exercise rate,
lead means associated with said pulse generator for delivering said stimuli to a selected chamber of the heart, and
delay means for stepping-up said pulse generator means from said resting rate to said exercise rate after an adjustable preset delay following activation of said delay means,
means associated with said pulse generator means and said delay means for maintaining said exercise rate for a predetermined time interval following said preset delay and then returning said pulse generator means to said resting rate; and
an external control means for patient-initiation of a command to said pacemaker to activate said delay means.
3. The combination according to claim 2, wherein said delay means is responsive to a second command initiated by the patient from said external control means at any time after receipt of the first said command and before the expiration of said predetermined time interval, to cancel the activation of said delay means.
4. The combination according to claim 3, wherein the stepping up and returning of said rates at which said stimuli are produced by said pulse generator means is effected gradually.
5. An implantable pulse generator unit for a cardiac pacemaker for use with an external magnet to permit patient-initiated adjustment of pacing rate from a resting rate to an exercise rate and vice versa, said unit comprising:
generator means for generating electrical stimuli at said resting rate,
control means associated with said generator means responsive, when enabled, for controllably increasing the rate at which electrical stimuli are generated from said generator means from said resting rate to said exercise rate, and
timer means responsive to positioning of said external magnet in proximity to said pulse generator unit for enabling said control means an adjustable preset delay period after said positioning, said preset delay period being of a duration perceptible to the patient.
6. An implantable pulse generator unit for a cardiac pacemaker for use with an external magnet to permit patient-initiated adjustment of pacing rate from a resting rate to an exercise rate and vice versa, said unit comprising:
generator means for generating electrical stimuli at said resting rate,
control means associated with said generator means responsive, when enabled, for controllably increasing the rate at which electrical stimuli are generated by said generator means from said resting rate to said exercise rate,
said control means including timing means for maintaining the rate at which electrical stimuli are generated by said generator means at said exercise rate for a predetermined time interval; and
delay means responsive to positioning of said external magnet in proximity to said pulse generator unit for enabling said control means an adjustable preset delay period thereafter.
7. The pulse generator unit of claim 6, wherein said control means automatically returns said generator means to said resting rate following the expiration of said predetermined time interval.
8. The pulse generator unit of claim 7, wherein said control means gradually increases the rate at which electrical stimuli are generated by said generator means from said resting rate to said exercise rate, and gradually returns said generator means to said resting rate following the expiration of said predetermined time interval.
9. The pulse generator unit of claim 6, wherein said delay means is responsive to a repositioning of said external magnet in proximity to said pulse generator unit after said control means has been enabled, for disabling said control means.
10. A cardiac pacemaker pulse generator for generating electrical stimuli to be delivered to the heart of a patient to pace the heart rate, said generator comprising:
means for generating said electrical stimuli at a first pacing rate,
means electrically connected to said stimuli generating means for selectively increasing the rate at which said stimuli are generated to a second higher pacing rate,
timing means for triggering said rate increasing means to increase said first pacing rate to a second higher pacing rate upon passage of an adjustable preselected period of time after said timing means is enabled, said preselected period of time being of a duration perceptible by the patient,
means responsive to a command signal from a patient-activated external device for enabling said timing means to commence timing.
11. The pulse generator according to claim 10, wherein
said enabling means is further responsive to a second command signal after said timing means is enabled, to disable said timing means prior to passage of said preselected period of time.
12. The pulse generator according to claim 10, further including
means responsive to a second command signal while said stimuli are being generated at said second higher pacing rate, for disabling said rate increasing means and thereby returning the rate at which said stimuli are generated by said stimuli generating means to said first pacing rate.
13. The pulse generator according to claim 12, wherein
said rate increasing means is responsive, when disabled, to decrementally reduce the rate at which said stimuli are generated by said stimuli generating means.
14. The pulse generator according to claim 10, wherein
said rate increasing means is responsive to said timing means reaching preset time intervals toward passage of said preselected period of time, for incrementally increasing the rate at which said stimuli are generated by said stimuli generating means in steps as each preset time interval is reached.
15. The method of pacing a pacemaker patient's heart rate using a magnet-controlled implantable pulse generator to adjust the stimulation rate from a resting rate to an exercise rate and vice versa, comprising the steps of
maintaining the stimulation rate of said pulse generator at said resting rate,
initiating a command signal to reset the stimulation rate of said pulse generator to said exercise rate after an adjustable programmed delay period following said command signal, and
returning the stimulation rate of said pulse generator to said resting rate in increments following a predetermined interval of time at said exercise rate.
The present invention relates generally to medical devices, and more particularly to implantable artificial cardiac pacemakers adapted to provide patient-variable stimulation rates appropriate to a condition of exercise by the patient.
The resting heart rate of sinus rhythm, that is, the rate determined by the spontaneously rhythmic electrophysiologic property of the heart's natural pacemaker, the sinus node, is typically in the range from about 65 to about 85 beats per minute (bpm) for adults. Disruption of the natural cardiac pacing and propagation system may occur with advanced age and/or cardiac disease, and is often treated by implanting an artificial cardiac pacemaker in the patient to restore and maintain the resting heart rate to the proper range.
In its simplest form, an implantable pacemaker for treatment of bradycardia (abnormally low resting rate, typically below 60 beats per minute (bpm)) includes an electrical pulse generator powered by a self-contained battery pack, and a catheter lead including at the distal end a stimulating cathodic electrode electrically coupled to the pulse generator. The lead is implanted intravenously to position the cathodic electrode in stimulating relation to excitable myocardial tissue in the selected chamber on the right side of the patient's heart. The pulse generator unit is surgically implanted in a subcutaneous pouch in the patient's chest, and has an integral electrical connector to receive a mating connector at the proximal end of the lead. In operation of the pacemaker, the electrical pulses are delivered (typically, on demand) via the lead/electrode system, including an anodic electrode such as a ring behind the tip for bipolar stimulation or a portion of the pulse generator case for unipolar stimulation, and the body tissue and fluid, to stimulate the excitable myocardial tissue.
Pacemakers may operate in different response modes, such as asynchronous (fixed rate), inhibited (stimulus generated in absence of specified cardiac activity), or triggered (stimulus delivered in presence of specified cardiac activity). Further, present-day pacers range from the simple fixed rate device that offers pacing with no sensing (of cardiac activity) function, to fully automatic dual chamber pacing and sensing functions (so-called DDD pacemakers) which may provide a degree of physiologic pacing by at least a slight adjustment of heart rate according to varying metabolic conditions in a manner akin to the natural pacing of the heart. Thus, some DDD pacemaker patients experience an increased pacing rate with physical exertion, with concomitantly higher cardiac output, and thereby, an ability to handle low levels of exercise. Unfortunately, a significant percentage of the pacemaker patient population, who suffer from atrial flutter, atrial fibrillation or sick-sinus syndrome, for example, cannot obtain the benefit of exercise-responsive pacing with conventional atrial-triggered pacemakers. Moreover, the DDD-type pacemakers are complex and costly to manufacture, which is reflected in a higher price to the patient.
It is a principal object of the present invention to provide a relatively simple and inexpensive pacemaker which provides pacing at a desired resting rate, and which is subject to limited control by the patient to provide a desired exercise rate for a preset period of time following which the pacemaker returns to the resting rate.
Various types of rate responsive pacemakers have been proposed which would sense a physiological parameter that varies as a consequence of physical stress, such as respiration, blood oxygen saturation or blood temperature, or merely detect physical movement, and correspondingly adjust the pacing rate. Many of these rate responsive pacemakers may also be relatively complex, and therefore expensive to the patient.
The present invention is directed toward a low cost pacemaker which can be adjusted at will by the patient, subject to the limited amount of control programmed into the device by the physician for that patient. According to the invention, patient control is manifested by bringing an external magnet into proximity with an implanted reed switch associated with the pacemaker. Of course, limited magnet control has been afforded to the patient in the past for some purposes, such as to enable transtelephonic monitoring of the pacemaker functions. Also, techniques are presently available which permit external adjustment of the stimulation rate of the pacemaker after implantation, as by means of a programming unit available to the physician. For obvious reasons, it is undesirable to give the patient the same latitude to control his pacemaker.
In U.S. Pat. No. 3,623,486, Berkovits disclosed a pacemaker adapted to operate at either of two stimulation rates, and switchable from one to the other by the physician using an external magnet. In this manner, the physician would be able to control the pacer mode and rate according to the needs of the particular patient. The purpose, in part, was to provide a pacemaker which had some adaptability to the patient's requirements. However, once set by the physician, the selected resting rate was maintained for that patient by the implanted pacer.
Another technique for external adjustment of pacing rate by the physician is found in the disclosures of U.S. Pat. No. 3,198,195 to Chardack, and U.S. Pat. No. 3,738,369 to Adams et al. In each, rate control is exercised by inserting a needle through a pacemaker aperture beneath the patient's skin to adjust a mechanism. In the Adams et al. disclosure, the needle is used to change the position of a magnet within the pacer to actuate a rate-controlling reed switch.
In U.S. Pat. No. 3,766,928, Goldberg et al. describe an arrangement for continuous adjustment of rate by a physician using an external magnet that cooperates with a magnet attached to the shaft of a rate potentiometer in the implanted pacemaker, to provide the initial setting of pacing rate desirable for the particular patient.
More recent proposals offer the patient limited control over the pacing rate. In U.S. Pat. No. 4,365,633, Loughman et al. disclose a pacemaker programmer which is conditioned by the physician to give the patient the capability to select any of three distinct rates: for sleep, for an awake resting state, and for exercise. The programmer generates a pulsating electromagnetic field, and allows the patient to select any of those three modes with an abrupt change in rate when the coil pod of the programmer is positioned over the implanted pacemaker. It is, of course, necessary to have the programmer at hand in order to change the stimulation rate, and the use of the device in public can be a source of extreme embarrassment to the patient.
In U.S. Pat. No. 4,545,380, Schroeppel describes a technique for manual adjustment of rate control contrasted with the activity sensing, automatic rate control disclosed by Dahl in U.S. Pat. No. 4,140,132. According to the Schroeppel patent, a piezoelectric sensor and associated circuitry are combined with the implanted pulse generator of the pacemaker to allow the patient to change from a resting rate to a higher rate by sharp taps on his chest near the site of the piezoelectric sensor. Such an arrangement requires that the sensor be sufficiently sensitive to respond to the patient's sharp taps, and yet be insensitive to the everyday occurrences the patient encounters while undergoing normal activities and which could otherwise result in false triggerings. These include presence in the vicinity of loud noise such as is generated by street traffic, being jostled in a crowd, experiencing bumps and vibrations while riding in a vehicle, and the like. Further, even when controlled in the manner described, this type of switching results in an abrupt, non-physiological change of rate.
Accordingly, it is another object of the present invention to provide a pacemaker which is capable of being controlled externally by the patient to assume exercise and non-exercise rate modes, in a manner that allows discreet and yet reliable control.
Yet another object of the invention is to provide a cardiac pacemaker whose stimulation rate is controllable by and according to a schedule selected by the patient.
Briefly, according to the present invention a cardiac pacemaker is manually controllable by the patient to preset time intervals of operation at a relatively high (exercise) rate and lower (resting) rate according to the patient's own predetermined schedule of exercise and rest. An important aspect of the invention is that the pulse generator may be implemented to undergo an adjustment of stimulation rate from a fixed resting rate of, say, 75 bpm, to a preselected exercise rate of, say, 120 bpm, following a predetermined period of time after activation by the patient using an external magnet, that is, after a predetermined delay following a patient-initiated command signal, and to remain at the higher rate for a preselected time interval. Thus, the patient may effectively "set a clock" in his pacemaker to elevate his heart rate at the time and for the duration of a scheduled exercise session, such as a game of tennis. Moreover, he may activate the pacemaker in this manner in the privacy of his own home well in advance of the exercise session.
According to another aspect of the invention, the pulse generator is implemented to return automatically to the resting rate at the expiration of the preselected exercise rate time interval. Hence, the patient need not carry his magnet with him to readjust the pacer to the resting rate at the completion of the scheduled exercise session. According to this aspect, after operating at the elevated stimulation rate for a time interval preselected to be suitable for the exercise session, say, one hour, the generator resets itself to return to the initial resting rate.
According to another feature of the invention, the rate is incremented and decremented in steps from one rate setting to the other to avoid abrupt changes, and therefore to provide a more physiological rate control than has heretofore been available in manually controlled pacemakers.
A further feature of the invention is that the pulse generator may be activated to disable the exercise rate command at any time after it has been given, including that to produce an early conclusion to an already-commenced exercise session. For example, if a scheduled tennis game or bicycling run is called off by the patient's partner after the patient has programmed in the higher rate, he need merely apply the magnet in proximity to the implanted pulse generator again to cancel the previous command and maintain the fixed resting rate. Similarly, if the exercise session is shortened, the rate may be returned to the resting rate by simply applying the magnet over the pulse generator.
The above and still further objects, aspects, features and attendant advantages of the present invention will become apparent to those of ordinary skill in the field to which the invention applies from a consideration of the following detailed description of a preferred embodiment thereof, taken in conjunction with the accompanying drawing, in which:
FIG. 1 is a block circuit diagram of a pulse generator unit of a cardiac pacemaker according to a preferred embodiment of the invention.
Referring now to FIG. 1, an implantable pulse generator unit 10 includes a pulse generator 12 and batteries 15 housed in a biocompatible metal case 17. Pulse generator 12 is implemented to be rate limited to generate output pulses at rates up to either of two low/high limit rates--for example, 75 pulses per minute (ppm) and 120 ppm, respectively--and to be incremented from the lower rate to the higher rate and decremented from the higher rate to the lower rate under the control of an up/down counter 18 associated with the pulse generator 12 in unit 10. Counter 18 may be set by application of a voltage level to its "up" input to commence counting toward the higher rate, and thereby to incrementally step the pulse repetition frequency up to that rate, and may be reset by application of a voltage level to its "down" input to commence counting toward the lower rate, and thereby decrementally step the pulse repetition frequency down to that rate. This is accomplished under the control of set and reset output voltage levels generated by a flip-flop circuit 21 also housed in case 17. The pulse generator unit 10 also includes a reed switch 25 which is actuable by placement of a magnet 27, external to the skin of the patient in whom the unit 10 is implanted, in proximity to case 17.
Reed switch 25, when actuated, serves to enable a delay timer 29 in unit 10. The delay timer responds to the enabling input to commence timing of its preset time delay interval. At the end of the delay interval, delay timer 29 produces a pulse for application to the flip-flop 21. Subsequent actuation of the reed switch before the timer 29 has timed out serves to disable the timer and reset it in preparation for a subsequent enabling signal from the reed switch. If timer 29 has already timed out before the reed switch is again actuated, the timer will respond to the disabling input, when the reed switch is actuated, to produce another pulse for application to the flip-flop 21. The flip-flop is thereupon reset and produces its reset output voltage level.
The set and reset output voltage levels of flip-flop 21 are also applied respectively to "set" and "reset" inputs of an interval timer 30. Upon being set, the interval timer commences timing out a predetermined time interval, and, at the expiration of that interval, generates a pulse for application to flip-flop 21. Upon being reset, the interval timer 30 is returned to the start of the predetermined time interval in preparation for initiating the timing of that interval on receipt at its "set" input of the next set output voltage level from the flip-flop.
The preset time period of delay timer 29 and the predetermined time interval of interval timer 30 are programmable by the physician according to the desires and needs of the particular patient. If, for example, the patient has a regularly scheduled early morning brisk walking session of one hour with friends, and resides near the starting point of the walk, the time period of the delay timer 29 may be programmed to be fifteen minutes. The time interval of the interval timer 30 is programmed to be one hour in length.
In operation, the pulse generator produces output pulses at the resting rate prescribed (and programmed) by the physician for the particular patient--in this exemplary embodiment, a resting rate of 75 bpm. The pulses are delivered to the stimulating cathodic electrode 35 in the right ventricle of the heart 40 via a lead 42, the reference electrode (anode) and the body tissue and fluids, according to the mode in which the pacemaker is designed to operate.
In the preferred embodiment, the pacemaker continues to operate at that rate unless and until the patient elects to initiate the exercise rate cycle. To do so, the patient places the magnet 27 in proximity to the implanted pulse generator unit 10 at about fifteen minutes prior to the appointed time for the exercise session, as a command to actuate reed switch 25. The patient may then choose to leave the magnet at home or take it along in the glove compartment of his car, since actuation of the reed switch has enabled the delay timer 29 and nothing more need be done by the patient to enable the pacemaker to commence the exercise rate at the expiration of the preset delay period.
Before the end of that period the patient has arrived at the starting point for the exercise session, and at the end of the delay period, the delay timer applies a pulse to flip-flop 21 which responds by generating a set output voltage level. The set voltage is applied to both the "up" input of counter 18 and the "set" input of interval timer 30. Accordingly, the counter commences its count, preferably at a relatively slow rate of, say, ten counts per minute, and correspondingly incrementally steps the pulse generator 12 output rate up to the upper rate limit of 120 ppm, and thereby gradually increases the patient's heart rate from 75 bpm to 120 bpm as the patient commences to exercise. Hence, the patient's heart rate and cardiac output are now at levels adequate for the patient to carry out the exercise session.
The pulse generator continues to supply pulses at the upper rate limit until interval timer 30, which commenced its predetermined time interval with the application of the set input voltage, times out, whereupon the interval timer produces an output pulse which is applied to flip-flop 21 to reset the latter. The flip-flop responds by providing a reset output voltage level for application to the "down" input of counter 18 and the "reset" input of the interval timer. Accordingly, the counter decrementally steps the pulse repetition frequency of the pulse generator down, preferably at the ten pulses per minute rate, to the lower rate limit of 75 ppm corresponding to a heart rate of 75 bpm. In this manner, the patient's heart rate is reduced gradually from the exercise rate to the resting rate at a time commensurate with the end of the exercise session. Also, the resetting of the interval timer by the set output voltage level of the flip-flop assures that the timer is ready to commence timing its predetermined interval on receipt of the next "set" input.
In the event that the exercise session is called off at any time after the delay timer 29 has been enabled and before the interval timer has timed out, the patient need merely place the magnet 27 once again in proximity to the implanted pulse generator unit. If the delay timer has not yet timed out, it is disabled by the actuation of the reed switch, and hence, flip-flop 21 remains reset, interval timer 30 remains reset, counter 18 is at its low count, and pulse generator 12 is at its lower rate limit. If the delay timer has timed out, it produces an output pulse in response to the disabling input from the reed switch, thereby resetting the flip-flop, resetting the interval timer, returning counter 18 toward its low count and pulse generator 12 toward its lower rate limit. To that end, delay timer 29 is provided with an internal clock such that, once enabled to time out the delay interval, it cannot be again enabled to do so until the passage of a preselected time interval, which is one hour and fifteen minutes in the present example, unless it has first been disabled during that overall interval. Of course, to cancel the exercise rate, the patient must have the magnet available to issue the second command but, as previously noted, once the delay timer is enabled through actuation of the reed switch the magnet may be kept in a convenient location, such as the glove compartment of the patient's car, to allow cancellation of the exercise rate in private.
Although a presently preferred embodiment has been described herein, it will be evident to those skilled in the art that variations and modifications of the preferred embodiment may be carried out without departing from the spirit and scope of the invention. Accordingly, it is intended that the present invention shall be limited only to the extent required by the appended claims and the applicable rules of law.
#include <iostream>
#include <stack>
#include <vector>
void getNeighborsAndPush(std::stack<int>& s, int node);
bool isVisited(const std::vector<int>& visitedNode, int node);
void printVector(const std::vector<int>& v);

// Adjacency matrix of an undirected 7-node graph.
int graph[][7] =
{
    {0, 1, 1, 1, 0, 0, 0},
    {1, 0, 0, 0, 0, 1, 0},
    {1, 0, 0, 1, 1, 0, 1},
    {1, 0, 1, 0, 1, 0, 0},
    {0, 0, 1, 1, 0, 0, 1},
    {0, 1, 0, 0, 0, 0, 0},
    {0, 0, 1, 0, 1, 0, 0}
};

int main()
{
    std::stack<int> s;
    std::vector<int> nodes;
    std::vector<int> visitedNode;

    for (int i = 0; i < 7; i++)
        nodes.push_back(i);

    // Iterative depth-first search from node 0: pop a node, skip it if
    // already visited, otherwise record it and push its neighbors.
    s.push(nodes[0]);
    while (!s.empty())
    {
        int curNode = s.top();
        s.pop();
        if (isVisited(visitedNode, curNode))
            continue;
        getNeighborsAndPush(s, curNode);
        visitedNode.push_back(curNode);
    }
    printVector(visitedNode);
    return 0;
}

void getNeighborsAndPush(std::stack<int>& s, int node)
{
    for (int i = 0; i < 7; i++) // You should use a vector instead of a fixed array.
    {
        if (graph[node][i] == 1)
            s.push(i);
    }
}

bool isVisited(const std::vector<int>& visitedNode, int node)
{
    for (int visited : visitedNode)
    {
        if (node == visited)
            return true;
    }
    return false;
}

void printVector(const std::vector<int>& v)
{
    for (int value : v)
        std::cout << " " << value;
    std::cout << '\n';
}
// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "extensions/browser/updater/update_data_provider.h"
#include <map>
#include <memory>
#include <set>
#include <string>
#include <vector>
#include "base/bind.h"
#include "base/files/file_path.h"
#include "base/files/file_util.h"
#include "base/files/scoped_temp_dir.h"
#include "base/run_loop.h"
#include "base/threading/thread_task_runner_handle.h"
#include "components/update_client/update_client.h"
#include "extensions/browser/disable_reason.h"
#include "extensions/browser/extension_prefs.h"
#include "extensions/browser/extension_registry.h"
#include "extensions/browser/extension_system.h"
#include "extensions/browser/extensions_test.h"
#include "extensions/browser/test_extensions_browser_client.h"
#include "extensions/browser/updater/extension_installer.h"
#include "extensions/common/extension_builder.h"
namespace extensions {
namespace {
class UpdateDataProviderExtensionsBrowserClient
: public TestExtensionsBrowserClient {
public:
explicit UpdateDataProviderExtensionsBrowserClient(
content::BrowserContext* context)
: TestExtensionsBrowserClient(context) {}
~UpdateDataProviderExtensionsBrowserClient() override {}
bool IsExtensionEnabled(const std::string& id,
content::BrowserContext* context) const override {
return enabled_ids_.find(id) != enabled_ids_.end();
}
void AddEnabledExtension(const std::string& id) { enabled_ids_.insert(id); }
private:
std::set<std::string> enabled_ids_;
DISALLOW_COPY_AND_ASSIGN(UpdateDataProviderExtensionsBrowserClient);
};
class UpdateDataProviderTest : public ExtensionsTest {
public:
using UpdateClientCallback = UpdateDataProvider::UpdateClientCallback;
UpdateDataProviderTest() {}
~UpdateDataProviderTest() override {}
void SetUp() override {
SetExtensionsBrowserClient(
std::make_unique<UpdateDataProviderExtensionsBrowserClient>(
browser_context()));
ExtensionsTest::SetUp();
}
protected:
ExtensionSystem* extension_system() {
return ExtensionSystem::Get(browser_context());
}
ExtensionRegistry* extension_registry() {
return ExtensionRegistry::Get(browser_context());
}
// Helper function that creates a file at |relative_path| within |directory|
// and fills it with |content|.
bool AddFileToDirectory(const base::FilePath& directory,
const base::FilePath& relative_path,
const std::string& content) const {
const base::FilePath full_path = directory.Append(relative_path);
if (!base::CreateDirectory(full_path.DirName()))
return false;
int result = base::WriteFile(full_path, content.data(), content.size());
return (static_cast<size_t>(result) == content.size());
}
void AddExtension(const std::string& extension_id,
const std::string& version,
bool enabled,
int disable_reasons,
Manifest::Location location) {
base::ScopedTempDir temp_dir;
ASSERT_TRUE(temp_dir.CreateUniqueTempDir());
ASSERT_TRUE(base::PathExists(temp_dir.GetPath()));
base::FilePath foo_js(FILE_PATH_LITERAL("foo.js"));
base::FilePath bar_html(FILE_PATH_LITERAL("bar/bar.html"));
ASSERT_TRUE(AddFileToDirectory(temp_dir.GetPath(), foo_js, "hello"))
<< "Failed to write " << temp_dir.GetPath().value() << "/"
<< foo_js.value();
ASSERT_TRUE(AddFileToDirectory(temp_dir.GetPath(), bar_html, "world"));
ExtensionBuilder builder;
builder.SetManifest(DictionaryBuilder()
.Set("name", "My First Extension")
.Set("version", version)
.Set("manifest_version", 2)
.Build());
builder.SetID(extension_id);
builder.SetPath(temp_dir.GetPath());
builder.SetLocation(location);
auto* test_browser_client =
static_cast<UpdateDataProviderExtensionsBrowserClient*>(
extensions_browser_client());
if (enabled) {
extension_registry()->AddEnabled(builder.Build());
test_browser_client->AddEnabledExtension(extension_id);
} else {
extension_registry()->AddDisabled(builder.Build());
ExtensionPrefs::Get(browser_context())
->AddDisableReasons(extension_id, disable_reasons);
}
const Extension* extension =
extension_registry()->GetInstalledExtension(extension_id);
ASSERT_NE(nullptr, extension);
ASSERT_EQ(version, extension->VersionString());
}
const std::string kExtensionId1 = "adbncddmehfkgipkidpdiheffobcpfma";
const std::string kExtensionId2 = "ldnnhddmnhbkjipkidpdiheffobcpfmf";
};
TEST_F(UpdateDataProviderTest, GetData_NoDataAdded) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(nullptr);
const auto data =
data_provider->GetData(ExtensionUpdateDataMap(), {kExtensionId1});
EXPECT_EQ(0UL, data.size());
}
TEST_F(UpdateDataProviderTest, GetData_EnabledExtension) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, true,
disable_reason::DisableReason::DISABLE_NONE, Manifest::INTERNAL);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
std::vector<std::string> ids({kExtensionId1});
const auto data = data_provider->GetData(update_data, ids);
ASSERT_EQ(1UL, data.size());
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
EXPECT_EQ(0UL, data[0]->disabled_reasons.size());
EXPECT_EQ("internal", data[0]->install_location);
}
TEST_F(UpdateDataProviderTest, GetData_EnabledExtensionWithData) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_PREF);
ExtensionUpdateDataMap update_data;
auto& info = update_data[kExtensionId1];
info.is_corrupt_reinstall = true;
info.install_source = "webstore";
const auto data = data_provider->GetData(update_data, {kExtensionId1});
ASSERT_EQ(1UL, data.size());
EXPECT_EQ("0.0.0.0", data[0]->version.GetString());
EXPECT_EQ("webstore", data[0]->install_source);
EXPECT_EQ("external", data[0]->install_location);
EXPECT_NE(nullptr, data[0]->installer.get());
EXPECT_EQ(0UL, data[0]->disabled_reasons.size());
}
TEST_F(UpdateDataProviderTest, GetData_DisabledExtension_WithNoReason) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, false,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_REGISTRY);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
const auto data = data_provider->GetData(update_data, {kExtensionId1});
ASSERT_EQ(1UL, data.size());
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
ASSERT_EQ(1UL, data[0]->disabled_reasons.size());
EXPECT_EQ(disable_reason::DisableReason::DISABLE_NONE,
data[0]->disabled_reasons[0]);
EXPECT_EQ("external", data[0]->install_location);
}
TEST_F(UpdateDataProviderTest, GetData_DisabledExtension_UnknownReason) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, false,
disable_reason::DisableReason::DISABLE_REASON_LAST,
Manifest::COMMAND_LINE);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
const auto data = data_provider->GetData(update_data, {kExtensionId1});
ASSERT_EQ(1UL, data.size());
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
ASSERT_EQ(1UL, data[0]->disabled_reasons.size());
EXPECT_EQ(disable_reason::DisableReason::DISABLE_NONE,
data[0]->disabled_reasons[0]);
EXPECT_EQ("other", data[0]->install_location);
}
TEST_F(UpdateDataProviderTest, GetData_DisabledExtension_WithReasons) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, false,
disable_reason::DisableReason::DISABLE_USER_ACTION |
disable_reason::DisableReason::DISABLE_CORRUPTED,
Manifest::EXTERNAL_POLICY_DOWNLOAD);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
const auto data = data_provider->GetData(update_data, {kExtensionId1});
ASSERT_EQ(1UL, data.size());
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
ASSERT_EQ(2UL, data[0]->disabled_reasons.size());
EXPECT_EQ(disable_reason::DisableReason::DISABLE_USER_ACTION,
data[0]->disabled_reasons[0]);
EXPECT_EQ(disable_reason::DisableReason::DISABLE_CORRUPTED,
data[0]->disabled_reasons[1]);
EXPECT_EQ("policy", data[0]->install_location);
}
TEST_F(UpdateDataProviderTest,
GetData_DisabledExtension_WithReasonsAndUnknownReason) {
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, false,
disable_reason::DisableReason::DISABLE_USER_ACTION |
disable_reason::DisableReason::DISABLE_CORRUPTED |
disable_reason::DisableReason::DISABLE_REASON_LAST,
Manifest::EXTERNAL_PREF_DOWNLOAD);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
const auto data = data_provider->GetData(update_data, {kExtensionId1});
ASSERT_EQ(1UL, data.size());
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
ASSERT_EQ(3UL, data[0]->disabled_reasons.size());
EXPECT_EQ(disable_reason::DisableReason::DISABLE_NONE,
data[0]->disabled_reasons[0]);
EXPECT_EQ(disable_reason::DisableReason::DISABLE_USER_ACTION,
data[0]->disabled_reasons[1]);
EXPECT_EQ(disable_reason::DisableReason::DISABLE_CORRUPTED,
data[0]->disabled_reasons[2]);
EXPECT_EQ("external", data[0]->install_location);
}
TEST_F(UpdateDataProviderTest, GetData_MultipleExtensions) {
// GetData with more than 1 extension.
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version1 = "0.1.2.3";
const std::string version2 = "9.8.7.6";
AddExtension(kExtensionId1, version1, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_REGISTRY);
AddExtension(kExtensionId2, version2, true,
disable_reason::DisableReason::DISABLE_NONE, Manifest::UNPACKED);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
update_data[kExtensionId2] = {};
const auto data =
data_provider->GetData(update_data, {kExtensionId1, kExtensionId2});
ASSERT_EQ(2UL, data.size());
EXPECT_EQ(version1, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
EXPECT_EQ(0UL, data[0]->disabled_reasons.size());
EXPECT_EQ("external", data[0]->install_location);
EXPECT_EQ(version2, data[1]->version.GetString());
EXPECT_NE(nullptr, data[1]->installer.get());
EXPECT_EQ(0UL, data[1]->disabled_reasons.size());
EXPECT_EQ("other", data[1]->install_location);
}
TEST_F(UpdateDataProviderTest, GetData_MultipleExtensions_DisabledExtension) {
// One extension is disabled.
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version1 = "0.1.2.3";
const std::string version2 = "9.8.7.6";
AddExtension(kExtensionId1, version1, false,
disable_reason::DisableReason::DISABLE_CORRUPTED,
Manifest::INTERNAL);
AddExtension(kExtensionId2, version2, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_PREF_DOWNLOAD);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
update_data[kExtensionId2] = {};
const auto data =
data_provider->GetData(update_data, {kExtensionId1, kExtensionId2});
ASSERT_EQ(2UL, data.size());
EXPECT_EQ(version1, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
ASSERT_EQ(1UL, data[0]->disabled_reasons.size());
EXPECT_EQ(disable_reason::DisableReason::DISABLE_CORRUPTED,
data[0]->disabled_reasons[0]);
EXPECT_EQ("internal", data[0]->install_location);
EXPECT_EQ(version2, data[1]->version.GetString());
EXPECT_NE(nullptr, data[1]->installer.get());
EXPECT_EQ(0UL, data[1]->disabled_reasons.size());
EXPECT_EQ("external", data[1]->install_location);
}
TEST_F(UpdateDataProviderTest,
GetData_MultipleExtensions_NotInstalledExtension) {
// One extension is not installed.
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version = "0.1.2.3";
AddExtension(kExtensionId1, version, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::COMPONENT);
ExtensionUpdateDataMap update_data;
update_data[kExtensionId1] = {};
update_data[kExtensionId2] = {};
const auto data =
data_provider->GetData(update_data, {kExtensionId1, kExtensionId2});
ASSERT_EQ(2UL, data.size());
ASSERT_NE(nullptr, data[0]);
EXPECT_EQ(version, data[0]->version.GetString());
EXPECT_NE(nullptr, data[0]->installer.get());
EXPECT_EQ(0UL, data[0]->disabled_reasons.size());
EXPECT_EQ("other", data[0]->install_location);
EXPECT_EQ(nullptr, data[1]);
}
TEST_F(UpdateDataProviderTest, GetData_MultipleExtensions_CorruptExtension) {
// With non-default data, one extension is corrupted:
// is_corrupt_reinstall=true.
scoped_refptr<UpdateDataProvider> data_provider =
base::MakeRefCounted<UpdateDataProvider>(browser_context());
const std::string version1 = "0.1.2.3";
const std::string version2 = "9.8.7.6";
const std::string initial_version = "0.0.0.0";
AddExtension(kExtensionId1, version1, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_COMPONENT);
AddExtension(kExtensionId2, version2, true,
disable_reason::DisableReason::DISABLE_NONE,
Manifest::EXTERNAL_POLICY);
ExtensionUpdateDataMap update_data;
auto& info1 = update_data[kExtensionId1];
auto& info2 = update_data[kExtensionId2];
info1.install_source = "webstore";
info2.is_corrupt_reinstall = true;
info2.install_source = "sideload";
const auto data =
data_provider->GetData(update_data, {kExtensionId1, kExtensionId2});
ASSERT_EQ(2UL, data.size());
EXPECT_EQ(version1, data[0]->version.GetString());
EXPECT_EQ("webstore", data[0]->install_source);
EXPECT_EQ("other", data[0]->install_location);
EXPECT_NE(nullptr, data[0]->installer.get());
EXPECT_EQ(0UL, data[0]->disabled_reasons.size());
EXPECT_EQ(initial_version, data[1]->version.GetString());
EXPECT_EQ("sideload", data[1]->install_source);
EXPECT_EQ("policy", data[1]->install_location);
EXPECT_NE(nullptr, data[1]->installer.get());
EXPECT_EQ(0UL, data[1]->disabled_reasons.size());
}
} // namespace
} // namespace extensions
US 5459828 A
A method of producing a raster font from a contour font entailing the steps of: deriving font metrics and character metrics of font characters in terms of arbitrary font units; scaling the font characters to a selected size and output resolution (pixels per unit length); altering the thickness of vertical and horizontal strokes of each character to a desired thickness, from the measured font metrics and character metrics, and including a difference applied to the thickness of the strokes by the printer process, to cause the strokes to be close to an integer number of pixels in thickness and to compensate for thinning and thickening which the printing engine might produce; bringing the leading and trailing edges of the characters to integer pixel locations and scaling the character proportionally therebetween; and producing a rasterized font from the altered contour font characters.
1. A printer processor implemented method for producing a raster font from a contour font defined by a list of points connected by curves, said raster font suitable for printing on a selected printer having known reproduction characteristics, including the steps of:
a) deriving for a contour font a set of font metrics and character metrics of a character in the font defined in terms of arbitrary font units;
b) scaling a character contour defined in arbitrary font units to a selected size in units of pixels;
c) altering thickness of character strokes by adjusting vertical and horizontal coordinates of each point defining the character contour in directions defined by a vector normal to the character contour at each point, by an amount required to obtain a desired thickness from the measured font metrics and character metrics, and an amount required to add a difference thickness thereto in accordance with the selected printer reproduction characteristics, said alteration amounts together causing the vertical and horizontal strokes to be sufficiently close to an integer number of pixels or half pixels so as to cause subsequent numerical rounding to produce uniform results across the font;
d) grid aligning the contour of each character so that leading and trailing edges, and top and bottom edges of the contour of each character fall on whole or half pixel positions; and
e) applying a rasterization function to the contour to convert each contour font character to a bitmap.
2. The method as defined in claim 1 wherein in said grid alignment step, after aligning said leading and top edges of said contours of each character on a whole pixel position, the length of any lines joining leading and trailing edges, and lines joining top and bottom edges, are rounded to an integer number of whole or half pixels, and the trailing edge and bottom edges are aligned at whole pixel positions.
3. In a printing system for printing on a selected printer having reproduction characteristics known and available as contour font correction data, wherein a font to be printed has a set of predefined font metrics and character metrics for each character in the font defined in terms of arbitrary font units, the method of preparing a contour font defined by a list of points connected by curves, for printing on the selected printer including the ordered steps of:
a) scaling each character in the contour font to a selected print resolution in pixels per unit length;
b) altering thickness of character strokes by adjusting vertical and horizontal coordinates of each point defining the contour of each character to a desired thickness in directions defined by a vector, normal to the character contour at each point, by an amount required to obtain a desired thickness from the measured font metrics and character metrics, and an amount required to add a difference thickness thereto in accordance with the contour font correction data for a particular printer, to cause the vertical and horizontal stroke thickness to approximate an integer number of pixels so as to cause subsequent numerical rounding to produce uniform results across the font;
c) grid aligning the contour of each character so that leading and trailing edges, and top and bottom edges of the contour of each character fall on whole pixel positions; and
d) applying a rasterization function to the contour to convert each contour font character to a bitmap.
4. The method as defined in claim 3 wherein in said grid alignment step, after aligning said leading and top edges of said contours of each character on a whole pixel position, the length of any lines joining leading and trailing edges, and lines joining top and bottom edges, are rounded to an integer number of pixels or half pixels, and the trailing edge and bottom edges are aligned at whole pixel positions.
A microfiche Appendix, having 5 fiche and 398 frames, is included herewith.
The present invention relates generally to the production of raster fonts from contour fonts, and more particularly, to a method of producing raster fonts from contour fonts taking into account characteristics of the contour font and the printer system which will ultimately print the font.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office files or records, but otherwise reserves all rights whatsoever.
Cross reference is made to U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the same assignee as the present invention.
U.S. Pat. No. 4,675,830 to Hawkins is incorporated herein by reference for the purposes of background information on contour fonts. U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the same assignee as the present invention, is incorporated by reference herein for the purposes of teaching rasterization.
"Contour fonts" is a term that refers to the use of outlines or contours to describe the shapes of characters used in electronic printing. In a contour font, each character shape is represented by one or more closed curves or paths that traces the boundary of the character. The contour is specified by a series of mathematical equations, which may be in any of several forms, the most common being circular arcs, straight lines, and polynomial expressions. The shape of the contour font is that of the ideal design of the character and, generally, does not depend on parameters associated with any printer. Contour fonts are ideal for use as master representations of typefaces.
Bitmap fonts or raster fonts are composed of the actual character images that will be printed on a page, and are made by scaling contours to the appropriate size, quantizing or sampling them at the resolution of the printer, and filling the interiors of the characters with black bits or pixels. Achieving high quality in this process is difficult, except at very high resolutions, and requires knowledge of both the marking technology and typographic design considerations. Often, a bitmap font is delivered to a printer. There is a separate bitmap font for each size of a font, and sometimes separate fonts for landscape and portrait orientations.
The advantage of a contour font is that it can be scaled to any size and rotated to any angle by simple mathematics. Therefore, a single font suffices to represent all possible printing sizes and orientations, reducing both font storage requirements and the cost of font handling.
The difficulty in this approach is in achieving high quality character images during the sampling process which generates the raster characters from the contour masters. If the contour character is simply sampled, there will be random ±1 pixel variations in stroke thickness. If the printing process tends to erode black areas (common in write-white laser xerography), characters will be consistently too thin. If the printing process tends to fatten black areas (common in write-black laser xerography), characters will be too thick.
At the high resolutions employed in phototypesetters, usually greater than 1,000 spi, no special techniques are required for scaling and sampling the contour font to generate a raster font of any size. Although simple sampling necessarily has random one-bit errors, such errors are small compared to the size of the character and therefore insignificant. At 300, 400, and 600 spi, though, character strokes are only three or four bits thick and each bit is important. The simplistic methods used by typesetter manufacturers are not sufficient.
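The scale of these sampling errors can be seen in a short sketch (illustrative code, not part of the patent): rounding each stroke edge to the nearest pixel independently makes a fixed-width stem render at varying widths at 300 spi, while the same jitter at typesetter resolutions is negligible relative to the stem width.

```python
# Sketch (not from the patent): naive sampling of a stroke whose edges fall
# at arbitrary sub-pixel positions. Rounding each edge independently makes
# the rasterized width vary by one pixel depending on sub-pixel phase.

def naive_stroke_width_px(left_edge, thickness, resolution_spi):
    """Rasterized width, in pixels, of a stroke sampled at a given resolution.

    left_edge and thickness are in inches; each edge is rounded to the
    nearest pixel independently, as simple sampling does.
    """
    left_px = round(left_edge * resolution_spi)
    right_px = round((left_edge + thickness) * resolution_spi)
    return right_px - left_px

# A 0.0115-inch stem starting at ten slightly different sub-pixel phases:
widths_300 = {naive_stroke_width_px(x / 1000.0, 0.0115, 300) for x in range(10)}
widths_2400 = {naive_stroke_width_px(x / 1000.0, 0.0115, 2400) for x in range(10)}

# At 300 spi the same stem renders as 3 or 4 pixels depending on phase;
# at 2400 spi the one-pixel jitter is negligible next to a ~28-pixel stem.
print(sorted(widths_300), sorted(widths_2400))
```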
U.S. Pat. No. 4,675,830 to Hawkins, uses defined points in a contour font that must be grid aligned to pixel positions, but the stem widths or edges are not aligned.
Of particular importance in generating fonts of optimal appearance are maintenance of uniform and correct stroke thickness among characters of a font and on different printing engines, uniform alignment of characters on a baseline, and uniform spacing of characters.
In accordance with the invention, there is provided a method for conversion of contour fonts to bitmap fonts with automatic thickening and thinning of strokes, and snapping of character edges to pixel or half pixel boundaries.
In accordance with the invention, there is provided a method of producing a raster font from a contour font entailing the steps of: first, deriving font metrics and character metrics of font characters in terms of arbitrary font units; scaling the font characters to a selected size and output resolution (pixels per unit length); altering the thickness of vertical and horizontal strokes of each character to a desired thickness, derived from the measured font metrics and character metrics and including a difference applied to the thickness of the strokes by the printing process, to cause the strokes to be close to an integer number of pixels in thickness and to compensate for thinning and thickening which the printing engine might produce; bringing the leading and trailing edges of the characters to integer pixel locations and scaling the character proportionally between the leading and trailing edges; and producing a rasterized font from the altered contour font characters.
These and other aspects of the invention will become apparent from the following description used to illustrate a preferred embodiment of the invention in conjunction with the accompanying drawings in which:
FIG. 1 shows a block diagram of the inventive optimized scaler rasterizer system.
FIGS. 2A-2E illustrate the development of a raster font from a contour font, using the system described in FIG. 1.
With reference to the drawing, where the showing is for the purpose of illustrating an embodiment of the invention and not for the purpose of limiting same, the Figure shows a block diagram of the present invention which will be referred to and described hereinafter.
FIG. 1 shows a block diagram of the contour rasterization process of the present invention. Beginning with a contour font 10, and with a character "H" shown in contour for illustration purposes at FIG. 2A the contour font is analyzed initially at hint generation step 20. At the hint generation, the parameters defining the font are determined, including measurement of the following metrics and character hints:
TABLE 1

Font Metric                                     Comments
Cap-height                                      Height of the H, I or similar letter
X-Height                                        Height of the lower case x
Ascender                                        Height of the lower case k, b, or similar letter
Descender                                       Position of the bottom of the lower case p or q
Thickness of Upper Case Stems                   Vertical stroke thickness on upper case H or K
Thickness of Upper Case Cross-Strokes           Horizontal stroke on upper case E or F
Thickness of Lower Case Stems                   Vertical stroke thickness on lower case k or l
Thickness of Lower Case Cross-Strokes           Horizontal stroke thickness on the f
Thickness of Auxiliary Character Stems
Thickness of Auxiliary Character Cross-Strokes
Hairline thickness                              Thickness of the cross bar on the e or the thin part of the o
(See, Appendix, page 13, ICFFontIODefs. Mesa)
Character hints are generated for each character and include the following:
TABLE 2

Character Metric                                            Comments
Position of all horizontal edges, with an indication        Left sides of strokes are leading edges
of whether each edge is a leading or trailing edge          and right sides of strokes are trailing edges
Position of all vertical edges, with an indication
of whether each edge is a leading or trailing edge
Direction of the normal vector (perpendicular) to the
contour at each control point in the contour, pointing
toward the white region
At hint generation 20, the font metrics and character hints are computed. Since no special information on the actual character contours, beyond the contours themselves, is required to perform these computations, any font may be accepted as input. Height thickness metrics are obtained either by examining images of specific individual characters or by averaging amongst several characters. Optionally, if these values are supplied externally, that is, the provider of the font provides these values, the external values may be used instead of the computed values. Edge positions are determined by looking for long vertical or horizontal portions of contours. Normal vectors are perpendicular to the contour, and are computed from contour equations and by determining which side of the contour is black and which side is white. For those points required for curve reconstruction, but which are not on the curve, the normals are calculated as if a normal vector extended from the curve through those points.
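As a rough illustration of the edge-position hint (an assumed polygonal contour representation, not the Typefounders code), long vertical segments of a contour can be found and classified as leading or trailing edges from the winding direction:

```python
# Sketch (assumptions, not the patent's code): detect vertical stroke edges
# in a polygonal contour by looking for long vertical segments. With the
# contour wound counterclockwise around black, a downward segment has black
# on its right, so it is a leading (left) edge; an upward segment is a
# trailing (right) edge.

def vertical_edges(contour, min_length):
    """Return (x, 'leading'|'trailing') for each long vertical segment."""
    edges = []
    n = len(contour)
    for i in range(n):
        (x0, y0), (x1, y1) = contour[i], contour[(i + 1) % n]
        if x0 == x1 and abs(y1 - y0) >= min_length:
            edges.append((x0, "leading" if y1 < y0 else "trailing"))
    return edges

# Outline of a 200-unit-tall, 60-unit-wide stem, wound counterclockwise:
stem = [(100, 0), (160, 0), (160, 200), (100, 200)]
print(vertical_edges(stem, min_length=100))
```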
In the attached Appendix, source code in the Mesa language of the Xerox Corporation is provided, demonstrating one possible embodiment of the described goals. The Mesa programming language operates on a microprocessor referred to as the Mesa microprocessor, which has been well documented, for example, in Xerox Development Environment, Mesa Language Manual, Copyright 1985 Xerox Corporation, Part No. 610E00170. This particular software is derived from the Typefounders product of the Xerox Corporation, Stamford, Conn. The Typefounders product computed all these character and font metrics, but did not provide them externally. (See Appendix, pages 67-319, for relevant Typefounders software modules called by software implementing the current invention, including: CharacterOpsDefs.mesa, CharacterOpsImplA.mesa, CharacterOpsImplB.mesa, pages 67-105; ContourOpsDefs.mesa, ContourOpsImplA.mesa, ContourOpsImplB.mesa, ContourOpsImplC.mesa, ContourOpsImplD.mesa, pages 106-195; FontOpsDefs.mesa, FontOpsImpl.mesa, pages 196-221; ImageOpsDefs.mesa, ImageOpsImplA.mesa, ImageOpsImplB.mesa, pages 222-265; TypefounderUtilsDefs.mesa, TypefounderImplA.mesa, TypefounderImplB.mesa, pages 266-319.) Additional software was added which makes these values available for subsequent processing (see Appendix, page 1, TypeDefs.mesa, for translation of the Typefounders data structure; page 36, MetricsDef.mesa, MetricsImpl.mesa, for measurement of font metrics; page 47, EdgeOpsDef.mesa, EdgeOpsImpl.mesa, for measurement of leading and trailing edge positions) and performs the perpendiculars calculations (see Appendix, page 56, NormalOpsDefs.mesa, NormalOpsImpl.mesa). This information is used for creation of a data structure for "hints" (see Appendix, page 13, ICFFontIODefs.mesa, for creation of the hint format for the next steps).
Of course, while the coded algorithms in the Appendix operating on the contour font data for the hint generation step 20 are given in the Mesa language, implementation is easily made in the Unix-based "C" language. The remainder of the system, and the algorithms it incorporates, are presented in the Appendix in the Unix-based "C" language.
Selecting a contour font for use invokes a program that looks for the font data and designates its final position in the output, while calling the various programs forming the steps described further hereinbelow (see, Appendix, page 320, raster.c). The contour font rasterization program described herein is useful on a variety of hardware platforms, attributes of which can be selected for enhanced operation of the system, such as, for example, a greater degree of precision in the calculations (the difference between 8-bit and 32-bit calculation). (see, Appendix, page 340, std.h)
At transform step 30 (see, Appendix, page 343, xform.c), the contour font is converted from arbitrary contour font units, which are supplied by the provider of the font, to a particular size, expressed in units of pixels. Typically, contour font units are provided in terms of the contour itself, i.e., the height or size of the contour font is one (1). That is, lengths of characters are placed in terms of the size of the font character itself. These values must be transformed into pixel unit values, or whatever other value is required; e.g., the scaled font may be 30 pixels tall. Additionally, it is at this point that the contour font is rotated for either landscape or portrait mode printing, as required. Rotation and scaling are accomplished in accordance with a previously determined transformation matrix equation 35, which mathematically determines the conversion of the contour font from font measurements to pixel values at a selected orientation which can be used by the printer. The transformed character H is shown at FIG. 2B.
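A minimal sketch of such a transformation matrix (assuming normalized font units where the em height is 1.0; not the xform.c code) might look like:

```python
import math

# Sketch: scale contour points from normalized font units (em height = 1.0)
# to pixels, optionally rotating 90 degrees for landscape printing. A 2x2
# matrix covers both operations, as transformation matrix 35 does.

def make_transform(size_px, landscape=False):
    theta = math.radians(90) if landscape else 0.0
    s, c = math.sin(theta), math.cos(theta)
    # Matrix (a, b, c, d), applied as (x, y) -> (a*x + b*y, c*x + d*y)
    return (size_px * c, -size_px * s, size_px * s, size_px * c)

def apply(m, pt):
    a, b, c, d = m
    x, y = pt
    return (a * x + b * y, c * x + d * y)

m = make_transform(30)           # portrait, scaled to 30 pixels tall
print(apply(m, (0.5, 1.0)))      # -> (15.0, 30.0)
```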
Subsequent to transformation step 30, at thickening or thinning step 40, font characters are thickened or thinned based on requirements of the transformation and requirements of the printing process. The character contour is adjusted to make the strokes thicker or thinner to compensate for the xerographic or other marking process to follow. There are three components of the thickening or thinning value. The first compensates for xerographic or other imaging effects. That is, if, for example, the marking technology will thin strokes by half a pixel, then strokes are thickened by half a pixel in this step. The amount of thickening or thinning is specified separately for the X and Y directions in the printer profile 50, which is created by the manufacturer of the printer. (see, Appendix, page 348, thicken.c)
The second component of thickening, called residual thickening, is applied to ensure uniformity of output strokes after the sampling or rasterization step. This amount for horizontal thickening on upper case letters, for example, is equal to the difference between the calculated ideal output vertical stem thickness, which is obtained by scaling the font metric to the proper size, and the result of rounding that thickness off to the actual pixel width which will be obtained after rasterization. This rounding is performed to the nearest whole pixel if half bitting is not enabled and to the nearest half pixel if half bitting is enabled. There are separate values for the horizontal and vertical directions and for upper case, lower case, and auxiliary characters.
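Residual thickening can be sketched as follows (hypothetical helper names and units; the actual computation lives in thicken.c): thicken or thin each stroke by exactly the amount needed to land on the pixel width it will round to, so every stem rasterizes uniformly.

```python
# Sketch of residual thickening, following the text of the patent:
# the residual is the gap between the ideal scaled stem width and the
# pixel (or half-pixel) width it will round to after rasterization.

def residual_thickening(stem_font_units, em_font_units, size_px, half_bitting=False):
    ideal_px = stem_font_units / em_font_units * size_px   # scaled stem width
    quantum = 0.5 if half_bitting else 1.0                 # half pixel if half bitting
    rounded_px = round(ideal_px / quantum) * quantum
    return rounded_px - ideal_px                           # amount to add (may be negative)

# A 70-unit stem in a 1000-unit em, scaled to 24 pixels: 1.68 px ideal,
# so thicken by 0.32 px to hit a uniform 2-pixel stem.
print(round(residual_thickening(70, 1000, 24), 2))
```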
The third component of thickening and thinning applies only to very small characters, and prevents drop-outs of fine lines. This amount is equal to the difference between the calculated scaled thickness of the hairlines, after thickening by the font thickening steps, and the minimum stroke thickness specified in the printer profile. When applied, this thickening brings fine lines up to the value of the minimum stroke thickness. The value is zero if the hairline is already greater than the minimum stroke thickness. (This process, referred to as "adaptive thickening," is not disclosed in the source code in the Appendix.)
The actual thickening or thinning applied is equal to the sum of these three components. Each component has an independent value in the X and Y directions. The direction to move each contour control point is specified by its normal vector. The thickened character H is shown at FIG. 2C.
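Applying the summed amounts along the normal vectors might be sketched as follows (assuming unit normals from hint generation; illustrative only):

```python
# Sketch: apply the summed X and Y thickening amounts by moving every
# control point along its toward-white normal vector (thinning uses
# negative amounts). Normals are assumed to be unit vectors.

def thicken(points, normals, amount_x, amount_y):
    return [(x + nx * amount_x, y + ny * amount_y)
            for (x, y), (nx, ny) in zip(points, normals)]

# Left edge of a stem (normal points left, toward white), thickened 0.5 px:
print(thicken([(3.0, 0.0), (3.0, 4.0)], [(-1.0, 0.0), (-1.0, 0.0)], 0.5, 0.0))
```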
At step 60, the snap function or grid alignment function is applied. The coordinate system of the character is varied in the horizontal direction to move vertical edges to positions where pixel boundaries will fall after rasterization, i.e., to whole pixel positions. This assures uniform stroke thickness in the rasterized character images. The process piecewise stretches or shrinks the character to force edges into alignment with pixel boundaries. On the left hand sides of characters, the left edge of each stroke is moved to the closest pixel boundary, while the right edge of the stroke is moved to the pixel boundary specified by rounding the stroke thickness. This process gives priority to maintaining uniform stroke thickness over absolute stroke position. That is to say, after the left edge of the character has been moved to a whole pixel position, the thickness of the stroke, or portion of the character, is examined. The thickness has already been adjusted in the thickening or thinning step, so that it is close to a whole pixel width. Accordingly, the right edge of the character is then moved to the whole pixel position obtained by rounding the stroke thickness, as opposed to simply moving it to the nearest pixel. On the right hand sides of characters, the roles of left and right edges of strokes are reversed: right edges of strokes are anchored, while left edges are snapped relative to the corresponding right edges. (see, Appendix, page 355, snap.c)
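A simplified sketch of left-side snapping (assumptions, loosely modeled on the description of snap.c rather than the actual code): the left edge rounds to the nearest pixel, the right edge follows at the rounded stroke width, and coordinates between anchors are stretched piecewise-linearly.

```python
# Sketch: snap one vertical stroke on the left side of a character,
# preserving stroke width over absolute position, then stretch the rest
# of the character's coordinates piecewise-linearly between the anchors.

def snap_stroke(left, right):
    """Return snapped (left, right) edge positions in pixels."""
    new_left = round(left)
    new_right = new_left + max(1, round(right - left))  # keep at least 1 px
    return new_left, new_right

def remap(x, old_pts, new_pts):
    """Piecewise-linear stretch of coordinate x given matched anchor lists."""
    for (o0, o1), (n0, n1) in zip(zip(old_pts, old_pts[1:]),
                                  zip(new_pts, new_pts[1:])):
        if o0 <= x <= o1:
            t = (x - o0) / (o1 - o0)
            return n0 + t * (n1 - n0)
    return x

old = [0.0, 3.4, 6.1, 10.0]          # char bbox and one stroke's two edges
snapped = snap_stroke(3.4, 6.1)      # 2.7-px-wide stroke -> 3 px at x = 3
new = [0.0, snapped[0], snapped[1], 10.0]
print(snapped, remap(4.75, old, new))
```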
In one variant of this scheme, the positions of left and right index points or width points, which are those points which determine character spacing and are made to coincide in constructing words, are snapped before the vertical edges.
In the vertical direction, snapping is performed to piecewise stretch characters so that positions of baseline, cap-height, x-height, and descender fall on pixel boundaries. Baseline and descender position are treated as bottoms of strokes, that is, anchored, while cap-height and x-height are treated as tops of strokes, computed relative to the baseline. All characters are snapped to all of these positions, ensuring uniform character alignment. After these font metric positions are snapped, horizontal edges are snapped in the same manner as vertical edges, with lower edges of strokes anchored and upper edges snapped relative to the lower edges in the lower half of the character and upper edges of strokes anchored and lower edges snapped relative to the upper edges in the upper half of the character.
In both horizontal and vertical directions, snapping is performed one edge at a time. That is, the first edge is snapped, stretching the coordinate system of the character slightly on one side of the snapped edge and shrinking it slightly on the other side. The second edge is then snapped, with its pre-snapping position perhaps already modified slightly by the first snap. This sequential snapping helps preserve local character features better than simultaneous snapping of all edges does. When the second edge is snapped, its area of influence on the coordinate grid extends only up to the first snapped edge, which stays in place. This process is then repeated for the remainder of the edges. The snapped character H is shown at FIG. 2D.
Once each character in the adjusted contour font has been placed in the grid and appropriately thickened or thinned, the final step is to sample the adjusted contour on a discrete grid. This step 70 can optionally produce half bitted output images, as controlled by the printer profile. Light half bitting produces half bitting on curves and diagonals, while heavy half bitting will also produce half bitted vertical and horizontal edges.
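Stripped of halfbitting, the sampling step reduces to testing pixel centers against the adjusted contour. A sketch (not the Rapid Halfbitting Stepper of the referenced application):

```python
# Sketch: rasterize the adjusted contour by testing each pixel center
# against the polygon with an even-odd crossing count, setting the pixel
# if the center falls inside.

def inside(px, py, poly):
    """Even-odd point-in-polygon test at (px, py)."""
    crossings, n = 0, len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        if (y0 > py) != (y1 > py):
            x_at = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if x_at > px:
                crossings += 1
    return crossings % 2 == 1

def rasterize(poly, width, height):
    return [[1 if inside(x + 0.5, y + 0.5, poly) else 0
             for x in range(width)] for y in range(height)]

# A snapped 3-pixel-wide, 4-pixel-tall stem fills columns 3..5 on every row:
bitmap = rasterize([(3, 0), (6, 0), (6, 4), (3, 4)], 8, 4)
print(bitmap[0])
```

Because the edges were snapped to whole pixel boundaries in the previous step, every row samples to the same uniform 3-pixel stroke.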
Rasterization in a preferred embodiment of this invention is in accordance with the process described in U.S. patent application Ser. No. 07/416,211 by S. Marshall, entitled "Rapid Halfbitting Stepper", and assigned to the assignee of the present invention. This application is incorporated by reference herein for the purposes of teaching rasterization. (see, Appendix, page 364, step.c and page 368, step.h for rasterization with halfbitting; page 372, bezline.c for stepping around curves; page 396, fill.c for filling). The rasterized character is shown at FIG. 2E.
It will no doubt be appreciated that numerous changes and modifications are likely to occur to those skilled in the art, and it is intended in the appended claims to cover all those changes and modifications which fall within the spirit and scope of the present invention.
|
Are you in compliance?
How Good Ideas Still Lead to Bad Reporting
Scott Taub | January 27, 2015
A number of different roles go into financial reporting. The Securities and Exchange Commission sets the rules requiring financial statements to be published and oversees the other parts of the process. The Financial Accounting Standards Board writes the accounting standards. Corporate accountants develop policies and procedures to apply those standards, implement systems to gather necessary information, and prepare the financial statements themselves.
Audit committees oversee the preparation of financial statements and hire auditors to certify them. The Public Company Accounting Oversight Board writes auditing standards. The auditor applies those standards in performing an audit and issues an audit opinion. The PCAOB inspects the audits. The SEC reviews financial statements and makes inquiries in the interests of identifying opportunities for improvement.
It’s a complicated process, and things can go wrong in many ways; sometimes problems happen even when everybody appears to be doing their job right.
One of those situations has been weighing on me lately. This problem doesn’t result in materially incorrect financial statements, but it does add cost and inefficiency to a process already difficult enough as it is. And because it isn’t clear that anybody is doing anything wrong, it isn’t clear how the problem should be fixed.
To illustrate the problem, I’ll walk through the steps I described above, narrowing in on the inefficiency—to wit, spending time and money on known and immaterial departures from Generally Accepted Accounting Principles.
As everybody knows, the SEC requires that financial statements be prepared in accordance with GAAP in all material respects, and the Commission looks to FASB to define GAAP. FASB identifies the correct accounting for a particular transaction or arrangement, noting always that its standards need not be applied to immaterial items.
Companies frequently implement policies and procedures designed to ensure that material items are accounted for in accordance with GAAP, while immaterial items are instead handled in a simpler (and cheaper) way. Some of the more common examples of this are: (1) expensing minor fixed assets instead of capitalizing and depreciating them; (2) expensing some overhead costs instead of capitalizing them into inventory; and (3) using straight-line interest expense instead of the effective interest method. Other examples abound, and everybody agrees that these are appropriate ways to make financial reporting more efficient.
In the not-too-distant past, auditors generally accepted these “simplifying” accounting policies after a qualitative evaluation that the policies were unlikely to lead to material departures from GAAP. Times have changed.
When a company follows an accounting policy that is inconsistent with GAAP, it is, of course, possible for the financial statements to wind up with a material error—something immaterial when a policy was adopted could become material over time. SEC questions have resulted in restatements in circumstances when a public company did not consider this possibility. Little surprise, then, that the PCAOB is interested as well.
PCAOB auditing standards require that an auditor accumulate and evaluate all mis-statements in financial statements that aren’t “clearly trivial.” Since these policies are departures from GAAP, it therefore seems appropriate that auditors propose adjusting entries like they would for any other mis-statement.
Because of that, auditors today want to know the size of the error caused by these non-GAAP policies before concluding that no further evaluation need be done, so that they can decide whether to propose an adjustment. The qualitative evaluation is now just a starting point.
If an adjustment is proposed because the mis-statement isn't "clearly trivial," it then seems reasonable that the proposed adjustment be treated like any other proposed adjustment. Of course, companies generally decide not to record these proposed adjustments, as long as the mis-statement isn't material. This is perfectly understandable. The policy was adopted to save time and money without materially mis-stating the financial statements, and all the subsequent work confirmed that there was no material error.
Auditing standards, however, require that any unrecorded proposed adjustment be communicated to the audit committee. As such, if management doesn’t record the adjustment, the auditor needs to make the audit committee aware of the GAAP departure.
And there’s the rub: These are all logical, understandable positions and actions. And they are leading to wasted time and money.
Quantifying the effects of these policies requires the company to analyze transactions and events that it believes too small to worry about. Even if you can make reasonable estimates, auditors and companies may have difficulty determining what level of internal controls should be applied to information that is being gathered solely to confirm that something is not material, and therefore not important. Regardless, corporate accountants and auditors alike are spending time (and money) on things that everybody believes are not material in the first place—which, of course, undoes some of the benefit of adopting the simpler policy in the first place.
Further inefficiencies occur because audit committees are bothered with things that aren’t the result of anybody making a mistake and have already been determined to have no material effect on the financial statements.
None of this wasted effort can really be traced to anybody failing in their responsibilities. Everybody has acknowledged that GAAP needn’t be applied to immaterial things. The company has designed policies and processes to focus on material items, while allowing immaterial items to be handled more easily. The SEC has only sought to identify situations where management’s processes failed to detect material errors. The PCAOB has only sought to ensure that auditors don’t miss those material errors and that the audit committee is aware of any identified errors. Auditors are simply following professional standards that require them to evaluate departures from GAAP.
Fortunately, many participants in the process have identified this inefficiency and want to do something about it. Unfortunately, it isn’t clear what should be done.
In some cases, FASB is being asked to add guidance that scopes out small items from accounting standards, instead of relying on the general materiality exception. Where such guidance exists, not applying the requirements of the standard to these small items isn’t a deviation from GAAP at all, so no tracking of the effects is necessary.
For example, constituents have asked FASB to scope out “inconsequential” promises to provide goods or services from the requirements of the new revenue recognition standard, amid concern that without such an explicit exception, auditors (and therefore companies) would need to identify and quantify the effects of promises that seem to have little value, such as sending periodic information statements, or answering installation or operational questions by phone. Of course, nobody believes that accounting for such promises would make a material difference in the first place, but the potential need to prove it every year could be a significant undertaking.
Having FASB deal with these matters through the standards might be an elegant solution, if it could work. FASB board members have been receptive to exploring ways to do it. But it puts FASB in the position of trying to write a standard on identifying material items without the benefit of any information on the transactions the standard might be applied to. It also would invite transaction structuring to take advantage of the scope exceptions, and it would only solve the problem one accounting topic at a time. Thus, even if it worked for “inconsequential” revenue items, the problem would continue to exist in myriad other areas.
Since the problem largely resides in the auditing process, a more effective solution ought to be achievable by focusing there. Perhaps the answer is simply focusing on management’s controls for periodically reviewing these “systemic GAAP departures” to ensure they haven’t gotten material. Such controls wouldn’t necessarily need to include quantitative measures in order to be effective. And if the controls are shown to be effective, it seems we should be able to avoid the non-value-added tracking, quantifying, and reporting that is currently going on. Of course, I oversimplify by describing a solution in just a few sentences, but there must be a way to eliminate work that virtually everybody acknowledges is not addressing a significant risk of material mis-statement.
I should cop to my part in this mess. As an SEC official a decade ago, I was asked whether I believed that GAAP departures resulting from policies like these should be considered mis-statements. The answer seemed easy: if you aren’t doing what GAAP requires, you have a mis-statement, even if you were departing from GAAP for a logical reason. I wasn’t terribly concerned when I answered that question way back then, and I still can’t think of a different answer. But the consequences of treating these departures as errors have been far beyond what I had envisioned.
|
Should I Be a Strict or Lenient Parent?
To be strict or not to be strict, that is the question – in fact, it’s the number-one question among child-rearing and education authorities, among teachers and, of course, parents. It’s doubtful that there is a parent who hasn’t at one time or another agonized over this.
There is widespread uncertainty about how to be at home (or how to come across in the classroom) – tough or soft, a strict disciplinarian or a permissivist. Have you noticed, however, that you seldom hear a parent or teacher admit "I am authoritarian" or "I am permissive"? These are terms reserved for those with whom you disagree.
The question, whether to be strict or lenient, never ceases to be debated in books and articles, or at conferences and conventions. Dr. Gordon points out that this question is what social scientists call a “pseudo problem” and how it also is a clear case of “either-or thinking”. Let’s take a look at what he means by that.
Parents and teachers seldom seem to recognize that it is not necessary to make a choice between these two leadership styles. Few adults know it, but there is an alternative to being at either end of the strictness-leniency scale. There is the choice of a third style.
This alternative is being neither authoritarian nor permissive, neither strict nor lenient. Does that mean being somewhere near the middle of the scale–moderately strict or moderately lenient? Not at all. The alternative is not being on the scale at all! How so?
Authoritarian leadership–whether at home or in the classroom–means that the control is in the hands of the adult leader. Decades of research have shown how ineffective it is to maintain control through power. Authoritarianism often creates fearful and subservient children and/or rebellion.
Still, no parent or teacher really wants to suffer the chaotic consequences of unrestricted freedom and lawless permissiveness either. It’s also true that most children are uncomfortable with the consequences of permissiveness. Permissive leadership means that control has been “permitted” to be in the hands of the youngsters. Children of permissive parents usually feel guilty about always getting their way. They also feel insecure about being loved, because their inconsiderate behaviors make them feel unlovable.
So what is the third viable alternative to both authoritarian and permissive adult leadership? It's what Dr. Gordon describes in detail in his model of parenting, a set of skills and methods known as Parent Effectiveness Training that are geared toward rearing self-disciplined children in a harmonious family climate.
For now, let’s just emphasize that this new approach to relating to youngsters requires a transformation in the way adults perceive children, as well as a shift in the way they treat them. This transformation can be accomplished by learning a few new skills and methods that are applied in everyday life.
This newsletter will describe and examine each of these skills and methods in its future editions and hopefully contribute to you having a more harmonious and peaceful home.
|
Oil and globalization fuel Al Qaeda terror network
President Obama touts the killing of Osama bin Laden as a major blow to Al Qaeda leadership. Mitt Romney says the terrorist network remains a major threat. They're both right. Middle East oil and the forces of globalization continue to fuel Al Qaeda offshoots around the world.
President Obama, key cabinet members, and members of the national security team monitor the mission to get Osama bin Laden at the White House, May 1, 2011. Op-ed contributor Steve Yetiv says combating globalized terror 'starts with targeting the circumstances...that fuel terrorism....[T]he next American president must work to decrease world oil dependence by investing in renewable energy.'
Pete Souza/The White House/AP/file
Mitt Romney has repeatedly argued that even though Osama bin Laden is dead, Al Qaeda remains a major threat to American security, while President Obama has described Al Qaeda's leadership as decimated. Who's right?
In fact, they are both right. Al Qaeda’s leadership is decimated, and the Obama team deserves much credit. But Al Qaeda has also splintered into affiliates and offshoots that keep bouncing back, like a Whac-A-Mole game. Al Qaeda can be repressed as Mr. Obama suggests, but it is hard for any US president to completely eliminate the terror network – and Americans should know why.
A large part of the reason is oil and globalization.
Many people have commented on the link between oil and terror, but what is more interesting is how oil and globalization have worked together to abet terrorism. The overlapping oil and globalization eras have produced circumstances that helped create and now still buttress the Al Qaeda phenomenon.
In the Afghanistan war of the 1980s, both private and government monies from oil-rich Persian Gulf countries supported Osama bin Laden’s “Afghan Arabs,” including their recruitment, housing, communications, and training when they were fighting the invading Russians. These revenues also helped bolster the Taliban, which housed Al Qaeda and still cooperates with it, and helped Pakistan build nuclear capabilities that both Mr. Romney and Obama believe could be stolen by terrorists and militants.
But Al Qaeda could not have become a transnational force without the interconnectedness of globalization. Cultural and economic globalization gave Al Qaeda access to the global highways and side roads it needed to spread around the world, set up shop in dozens of countries, communicate at far distances, and plan large-scale terrorist attacks.
Fortunately, there are ways to combat terror networks. This work starts with targeting the circumstances and causes that fuel terrorism in the first place. For one, the next American president must work to decrease world oil dependence by investing in renewable energy.
But terrorism, while fueled by oil and aided by globalization, has other causes as well. America can also take the lead, with its Arab, European, and Asian allies, to support economic development in the Middle East as an anti-terrorism strategy. The US must also continue to help these nations fight poverty, improve employment opportunities, develop civil society, and broaden education.
The next administration will need to remain vigilant in fighting terrorism, but the best defense in this case is a strong offense.
Steve Yetiv is a professor of political science at Old Dominion University in Norfolk, Va. He is the author of “The Petroleum Triangle: Oil, Globalization, and Terror” and “The Absence of Grand Strategy: The United States in the Persian Gulf.”
|
Two Important Ways Cloud ERP Enables IIoT in Manufacturing
Your manufacturing processes are producing more than just marketable items that generate revenue. They’re also producing vast amounts of data.
Until recently, manufacturers could only perform the most basic data analysis. Today, however, analytics capabilities combined with more opportunities to capture machine and production data present huge opportunities to drive efficiency. The growth of inexpensive, embedded sensors, the plummeting cost of employee wearable devices, and the availability of analytics platforms give manufacturers the power to connect, collect, and act on data like never before. This is the age of the Industrial Internet of Things (IIoT).
The data generated by IIoT can revolutionize the way manufacturers run their business. As McKinsey has observed:
“In manufacturing, operations managers can use advanced analytics to take a deep dive into historical process data, identify patterns and relationships among discrete process steps and inputs, and then optimize the factors that prove to have the greatest effect on yield.”
In other words, data can help you boost your output—but only if you have the right foundation to handle it. And simply implementing technology isn’t enough—you have to make sure it can scale, cost-effectively, to support the even more massive volumes of data your operations will no doubt produce in the future.
Daunted by the thought of building, expanding, and maintaining their own data centers, many manufacturers are turning to the cloud. There’s a good reason for this: the cloud provides unprecedented computing power, scale, and rapid innovation. And cloud ERP systems are the most effective and affordable way to connect massive data stores, mine them for useful insights, and translate these insights into greater operational efficiency. With cloud ERP, you get the connectivity and scalability you need to take full advantage of IIoT and big data.
How Cloud ERP Provides Connectivity
One of the biggest obstacles to data mining is the good old-fashioned data silo. Huge volumes of data are often accessible only to specific teams, but not to everyone in the organization who might need to aggregate them with other data volumes. It is through this aggregation that the best insights are often revealed.
As Cindy Jutras, president of the consulting firm Mint Jutras, points out in her article "IoT and the Connected Manufacturer":
“[T]he concept of collecting massive volumes of data from manufacturing processes is not really new. Manufacturers have had sensors and automated data collection (ADC) devices operating on their plant floors for decades now. But that’s also been the problem. The data never really got off the plant floor. All too often it simply sat out there, disconnected from other enterprise data, not reaching its full potential. It’s time we start connecting all the dots. But of course you can’t do that without the right technology.”
Within the open and connective fabric of the cloud, manufacturers can easily implement applications and data volumes that connect with their machines and sensors. But they still need a central hub: software that helps them not only find the right data, but also add value to it.
That hub should be a reliable cloud ERP system that lets the company’s brightest minds go in and analyze data from any desktop, laptop, or mobile device they choose. Cloud ERP sits between all entities and centralizes data collection and capture. This eliminates data silos, which create blockades to data mining.
How Cloud ERP Delivers Scalability to Support Data Growth
As we mentioned, your data volumes will only become larger as time passes. Legacy IT infrastructures are struggling to keep pace with today’s rapid increase in data. The cloud embraces it.
Highly elastic cloud technology eliminates concerns about storage and computing limitations by providing the scalability to support and analyze modern manufacturing data. As machines continue to generate staggering amounts of data, you’ll never have to worry about increasing database capacity, building new data centers, or exceeding available storage and computing power.
As you add more machines to your network, the cloud will support your innovation. Cloud systems are designed to adapt quickly to evolving technology. This enables you to connect and test new IIoT technologies at will.
As you collect huge amounts of data from machines, wearables, and computers, a cloud ERP system will keep these massive data stores at your fingertips and help you navigate them to find the insights you’re looking for. Cloud ERP is built to provide the computational power you need—seamlessly—through any browser.
Behind every successful use of big data in the manufacturing sector is a cloud ERP system that can help decision-makers find and analyze data stores to unearth the insights hidden within. Learn more about how the Plex Manufacturing Cloud can increase your manufacturing intelligence and help you make more informed, data-driven decisions.
|
// Knight-distance assignment: cost() gives the minimum number of knight moves
// between two squares in O(1); hungarianAlgorithm() then finds the cheapest
// one-to-one matching between the two point sets read in main().
#include <iostream>
#include <algorithm>
#include <vector>
#include <bitset>
#include <cstdint>  // int64_t and friends
#include <cstdio>   // printf in the test helpers
#include <cstdlib>  // abs on 64-bit values
using namespace std;
typedef int64_t i64;
typedef int8_t i8;
typedef uint8_t u8;
typedef int32_t i32;
typedef uint32_t u32;
struct P { i64 x, y; };
static constexpr i64 LU_SIZE = 10;
// Base case: precomputed knight-move distances LU[dy][dx] for offsets 0..9.
static const i64 LU[LU_SIZE][LU_SIZE] = {
{0, 3, 2, 3, 2, 3, 4, 5, 4, 5},
{3, 2, 1, 2, 3, 4, 3, 4, 5, 6},
{2, 1, 4, 3, 2, 3, 4, 5, 4, 5},
{3, 2, 3, 2, 3, 4, 3, 4, 5, 6},
{2, 3, 2, 3, 4, 3, 4, 5, 4, 5},
{3, 4, 3, 4, 3, 4, 5, 4, 5, 6},
{4, 3, 4, 3, 4, 5, 4, 5, 6, 5},
{5, 4, 5, 4, 5, 4, 5, 6, 5, 6},
{4, 5, 4, 5, 4, 5, 6, 5, 6, 7},
{5, 6, 5, 6, 5, 6, 5, 6, 7, 6},
};
static constexpr i64 TABLE_SIZE = 100;
static i64 table[TABLE_SIZE+4][TABLE_SIZE+4];
// Fills `table` with true knight-move distances from (0,0) by relaxation.
void testTableHelper()
{
for(int y = 0; y < TABLE_SIZE+4; y++)
for(int x = 0; x < TABLE_SIZE+4; x++)
table[y][x] = 0x7FFF'FFFF'FFFF'FFFF;
table[0+2][0+2] = 0;
vector<P> q = {{0,0}};
while(not q.empty())
{
static const P dp[] = {
{-2, -1},
{-2, +1},
{-1, -2},
{-1, +2},
{+1, -2},
{+1, +2},
{+2, -1},
{+2, +1},
};
const P p = q.back();
q.pop_back();
const i64 c0 = table[p.y+2][p.x+2];
for(int i = 0; i < 8; i++) {
const P np{ p.x+dp[i].x, p.y+dp[i].y };
if(np.x >= TABLE_SIZE+2 || np.y >= TABLE_SIZE+2 || np.x < -2 || np.y < -2)
continue;
i64& c1 = table[np.y+2][np.x+2];
if(c0+1 < c1) {
c1 = c0+1;
q.push_back(np);
}
}
}
}
// Minimum knight moves from a to b in O(1): closed-form batches of (2,1)
// runs, (3,3)-per-2-moves diagonals, and (4,0)-per-2-moves straights reduce
// the offset until it fits the LU lookup table.
i64 cost(P a, P b)
{
P c { abs(a.x-b.x), abs(a.y-b.y) };
i64 res = 0;
if(c.x > c.y) {
const i64 i = min(
min(c.y, c.x-c.y),
max(i64(0), max((c.x-LU_SIZE+2)/2, (c.y-LU_SIZE+1)/1))
);
c.x -= 2*i;
c.y -= i;
res += i;
/*while(c.y > 0 && c.x > c.y && (c.x >= LU_SIZE || c.y >= LU_SIZE)) {
c.x -= 2;
c.y -= 1;
res += 1;
}*/
}
else {
const i64 i = min(
min(c.x, c.y-c.x),
max(i64(0), max((c.x-LU_SIZE+1)/1, (c.y-LU_SIZE+2)/2))
);
c.x -= i;
c.y -= 2*i;
res += i;
/*while(c.x > 0 && c.y > c.x && (c.x >= LU_SIZE || c.y >= LU_SIZE)) {
c.x -= 1;
c.y -= 2;
res += 1;
}*/
}
if(c.x >= LU_SIZE && c.y >= LU_SIZE) {
const i64 i = min(
max(i64(0), max((c.x-LU_SIZE+3)/3, (c.y-LU_SIZE+3)/3)),
min(c.x/3, c.y/3)
);
c.x -= 3*i;
c.y -= 3*i;
res += 2*i;
/*while(c.x > 0 && c.y > 0 && (c.x >= LU_SIZE || c.y >= LU_SIZE)) {
c.x -= 3;
c.y -= 3;
res += 2;
}*/
}
if(c.x >= LU_SIZE) {
const i64 i = (c.x - LU_SIZE) / 4 + 1;
c.x -= 4*i;
res += 2*i;
}
else if(c.y >= LU_SIZE) {
const i64 i = (c.y - LU_SIZE) / 4 + 1;
c.y -= 4*i;
res += 2*i;
}
res += LU[c.y][c.x];
return res;
}
// Verifies the closed-form cost() against the BFS-filled reference table.
void testTable()
{
testTableHelper();
for(int y = 0; y < TABLE_SIZE; y++)
{
for(int x = 0; x < TABLE_SIZE; x++)
{
const i64 myCost = cost({0, 0}, {x, y});
const i64 rightCost = table[y+2][x+2];
if(myCost != rightCost)
{
printf("Error in {%d, %d}\n", x, y);
printf("Has %ld, should have %ld\n", myCost, rightCost);
}
//printf("%3ld", rightCost);
}
//printf("\n");
}
}
// Hungarian algorithm (matrix reduction + minimum line cover) for the n x n
// assignment problem; costs is row-major, returns the minimum total cost.
i64 hungarianAlgorithm(const i64* costs, i32 n)
{
static i64 t[16*16];
for(i32 i = 0; i < n*n; i++)
t[i] = costs[i];
// rows pass
for(i32 y = 0; y < n; y++) {
i64 minVal = t[y*n];
for(i32 x = 1; x < n; x++)
minVal = min(minVal, t[x + y*n]);
for(i32 x = 0; x < n; x++)
t[x + y*n] -= minVal;
}
// cols pass
for(i32 x = 0; x < n; x++) {
i64 minVal = t[x];
for(i32 y = 1; y < n; y++)
minVal = min(minVal, t[x + y*n]);
for(i32 y = 0; y < n; y++)
t[x + y*n] -= minVal;
}
// count zeros in each row/col
static i32 zeros[32];
for(i32 y = 0; y < n; y++) {
zeros[y] = 0;
for(i32 x = 0; x < n; x++)
if(t[x + y*n] == 0)
zeros[y]++;
}
for(i32 x = 0; x < n; x++) {
zeros[n+x] = 0;
for(i32 y = 0; y < n; y++)
if(t[x + y*n] == 0)
zeros[n+x]++;
}
while(true)
{
//i32 zeros2[32];
//for(i32 i = 0; i < 2*n; i++)
// zeros2[i] = zeros[i];
bitset<32> crossed = 0;
/*{
for(i32 y = 0; y < n; y++) {
for(i32 x = 0; x < n; x++) {
if(t[x + n*y] == 0 && !crossed[n+x] && !crossed[y]) {
if(zeros[n+x] > zeros[y])
crossed[n+x] = true;
else
crossed[y] = true;
}
}
}
}*/
// find min number of zero crossings
//while(crossed.count() < n)
{
// assign rows that have one zero
bitset<16*16> assigned = 0;
bitset<32> crossed2 = 0;
for(i32 y = 0; y < n; y++) {
i32 countZeros = 0;
for(i32 x = 0; x < n; x++)
if(t[x + n*y] == 0 && !crossed2[n+x])
countZeros++;
if(countZeros == 1)
for(i32 x = 0; x < n; x++) {
if(t[x + n*y] == 0 && !crossed2[n+x]) {
crossed2[n+x] = true;
crossed2[y] = true;
assigned[x + n*y] = true;
break;
}
}
}
// assign cols that have one zero
for(i32 x = 0; x < n; x++) {
i32 countZeros = 0;
for(i32 y = 0; y < n; y++)
if(t[x + n*y] == 0 && !crossed2[n+x] && !crossed2[y])
countZeros++;
if(countZeros == 1)
for(i32 y = 0; y < n; y++) {
if(t[x + n*y] == 0 && !crossed2[n+x] && !crossed2[y]) {
crossed2[n+x] = true;
crossed2[y] = true;
assigned[x + n*y] = true;
break;
}
}
}
if(assigned.none())
{
// No unique-zero assignment was possible: fall back to marking every zero.
for(int y = 0; y < n; y++)
for(int x = 0; x < n; x++) {
if(t[x + y*n] == 0) {
crossed2[n+x] = true;
crossed2[y] = true;
assigned[x + n*y] = true;
}
}
}
// Mark one having no assignments
bitset<32> marked = 0;
bitset<32> justMarkedRows = 0;
bitset<32> justMarkedCols = 0;
for(i32 y = 0; y < n; y++) {
if(!crossed2[y]) {
justMarkedRows[y] = true;
marked[y] = true;
break;
}
}
do {
// Mark columns having zeros in newly marked rows
justMarkedCols = 0;
for(i32 x = 0; x < n; x++) {
for(i32 y = 0; y < n; y++) {
if(justMarkedRows[y] && t[x + n*y] == 0 && !crossed2[y]) {
justMarkedCols[x] = !marked[n+x];
break;
}
}
}
// Mark rows having assignments in newly marked columns
justMarkedRows = 0;
for(i32 y = 0; y < n; y++) {
for(i32 x = 0; x < n; x++) {
if(justMarkedCols[x] && assigned[x + n*y]) {
justMarkedRows[y] = !marked[y];
break;
}
}
}
marked |= justMarkedRows;
marked |= justMarkedCols << n;
} while(justMarkedRows.any() || justMarkedCols.any());
// cross unmarked rows and marked cols
for(i32 i = 0; i < n; i++)
crossed[i] = !marked[i];
for(i32 i = 0; i < n; i++)
crossed[n+i] = marked[n+i];
}
if(crossed.count() == n) {
i64 res = 0;
for(i32 i = 0; i < n; i++)
{
i64 minZeros = 0x7FFF'FFFF'FFFF'FFFF;
i64 minZerosInd = -1;
for(i32 j = 0; j < 2*n; j++)
if(zeros[j] > 0 && zeros[j] < minZeros) {
minZeros = zeros[j];
minZerosInd = j;
}
{
i32 j = minZerosInd;
zeros[j] = 0;
if(j < n) { // row
i32 x;
for(x = 0; x < n; x++) {
if(zeros[x+n] && t[x + n*j] == 0) {
zeros[x+n] = 0;
res += costs[x + n*j];
break;
}
}
for(i32 y = 0; y < n; y++)
if(y != j && t[x + n*y] == 0)
zeros[y]--;
for(x++; x < n; x++)
if(zeros[x+n] && t[x + n*j] == 0)
zeros[x+n]--;
}
else { // col
j -= n;
i32 y;
for(y = 0; y < n; y++) {
if(zeros[y] && t[j + n*y] == 0) {
zeros[y] = 0;
res += costs[j + n*y];
break;
}
}
for(i32 x = 0; x < n; x++)
if(x != j && t[x + n*y] == 0)
zeros[x+n]--;
for(y++; y < n; y++)
if(zeros[y] && t[j + n*y] == 0)
zeros[y]--;
}
}
}
return res;
}
else { // crossed.count() < n
i64 valMin = 0x7FFF'FFFF'FFFF'FFFF;
for(i32 y = 0; y < n; y++)
for(i32 x = 0; x < n; x++)
{
if(!crossed[y] && !crossed[x+n] && t[x+y*n] < valMin) {
valMin = t[x+y*n];
}
}
for(i32 y = 0; y < n; y++)
for(i32 x = 0; x < n; x++)
{
if(!crossed[y] && !crossed[x+n]) {
t[x+y*n] -= valMin;
if(t[x+y*n] == 0) {
zeros[y]++;
zeros[x+n]++;
}
}
else if(crossed[y] && crossed[x+n]) {
if(t[x+y*n] == 0) {
zeros[y]--;
zeros[x+n]--;
}
t[x+y*n] += valMin;
}
}
}
}
}
// Input: test cases of n, then n source points and n target points, until
// n = 0. Output per case: "case. minimal total knight moves".
int main()
{
static P hh[16];
static P tt[16];
static i64 mm[16*16];
//testTable();
i64 n;
cin >> n;
i64 iCase = 1;
while(n)
{
for(i64 i = 0; i < n; i++)
cin >> hh[i].x >> hh[i].y;
for(i64 i = 0; i < n; i++)
cin >> tt[i].x >> tt[i].y;
i64 end = 0;
for(i64 i = 0; i < n; i++)
for(i64 j = 0; j < n; j++)
{
mm[end] = cost(hh[i], tt[j]);
//printf("%ld %ld: %ld\n", i, j, mm[end]);
end++;
}
const i64 res = hungarianAlgorithm(mm, n);
cout << iCase << ". " << res << endl;
iCase++;
cin >> n;
}
}