text: string (lengths 314 to 267k characters)
labels: int64 (values 0 to 29)
The OECD is carrying out projects that can help in the planning and design of future educational facilities – exploring trends in education and studying innovative learning environments. In presenting this case study of an innovative school building in Scotland, the author describes its unique design features, conveys the viewpoints of the users, client and design team, and reveals the lessons learned. Šoštanj Primary School offers a learning process which can enrich traditional forms of schooling. It demonstrates how a school, including its infrastructure, can influence family life and the environment, creating new social patterns and a local identity. This report includes an overview of the Netherlands' tertiary education system; an account of trends and developments; an analysis of the strengths and challenges in tertiary education in the Netherlands; and recommendations for future policy development. Stimulating innovative and growth-oriented entrepreneurship is a key economic and societal challenge to which universities and colleges have much to contribute. This book examines the role that higher education institutions are currently playing through teaching entrepreneurship and transferring knowledge and innovation to enterprises and discusses how they should develop this role in the future. This Country Background Report for Chile was prepared by Daniel Uribe and Juan Salamanca of the Higher Education Division at the Ministry of Education of Chile as an input to the OECD Thematic review of Tertiary Education. The purpose of this activity is to provide policymakers with options for developing systems to recognise non-formal and informal learning; to effectively implement the agenda; and determine under what conditions recognition of non-formal and informal learning can be beneficial for all.
26
Tecnalia works on the development of semantic descriptors applied to collaboration indicators on e-learning platforms. E-learning platforms are becoming particularly relevant now that training is a key factor for people, and both companies and universities or training centres are aware that these platforms are going to be an essential training pillar in the future, thanks to their flexibility, geographic reach and adaptation to students' different learning paces. One of the key aspects is the potential of collaborative learning between students: the ability to solve a problem as a group, divide and share out the work, and learn from it has a clear advantage over individual learning. It is therefore essential to be able to measure each student's degree of collaboration, and to have transferable metrics and indicators so that any e-learning platform can monitor how much its students collaborate. With this information, a platform can generate recommendations to encourage collaboration between its members, or correct unhelpful trends in a collaborative learning process. Artificial Intelligence techniques are used to obtain this data and generate appropriate recommendations: Machine Learning and Data Mining provide metrics and indicators that are transferable to any e-learning programme, through semantic techniques that can be used on any learning platform for adaptive and recommendation purposes.
26
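As a rough illustration of the kind of transferable collaboration indicator described above (not Tecnalia's actual metric), the sketch below derives a per-student score from a generic forum-event log. The field names, the two signals chosen, and the equal weighting are all assumptions made for this example.

```python
from collections import defaultdict

# Hypothetical event log: each post records its author and, if it is a
# reply, the author being replied to (None for a new thread).
events = [
    {"student": "ana",  "reply_to": None},
    {"student": "ben",  "reply_to": "ana"},
    {"student": "ana",  "reply_to": "ben"},
    {"student": "cruz", "reply_to": None},
    {"student": "ben",  "reply_to": "cruz"},
]

posts = defaultdict(int)    # total posts per student
replies = defaultdict(int)  # posts that respond to someone else
peers = defaultdict(set)    # distinct classmates interacted with

for e in events:
    s = e["student"]
    posts[s] += 1
    if e["reply_to"] is not None and e["reply_to"] != s:
        replies[s] += 1
        peers[s].add(e["reply_to"])

n_students = len(posts)
for s in sorted(posts):
    reply_share = replies[s] / posts[s]             # how interactive the posts are
    reach = len(peers[s]) / max(n_students - 1, 1)  # breadth of collaboration
    indicator = 0.5 * reply_share + 0.5 * reach     # illustrative equal weighting
    print(f"{s}: collaboration indicator = {indicator:.2f}")
```

A real platform would draw on richer signals (chat activity, shared-document edits, task division) and learned models rather than a fixed formula, but an indicator of this kind would be computed and consumed in the same way to drive recommendations.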
21st Century Skills | News American Students Ahead of the Curve on PISA Problem Solving Scores Take the PISA Problem-Solving Test OECD has made a handful of sample questions from the problem-solving test available to the public. The questions are computer-based and require interaction with objects on the screen. Visit OECD.org. No, this isn't an April Fool's joke. There's actually good news about American education coming out of the Program for International Student Assessment (PISA), the triennial international assessment that ranks countries based on their students' proficiency in math, science and language arts (as measured by standardized tests). In data released today from the 2012 international assessment, on average 15-year-old students from the United States showed a greater degree of proficiency in problem solving, in particular on tasks for which they are required to uncover information in order to solve the problem, than their peers from other countries. U.S. students, with an average score of 508, didn't rank highest in problem-solving — that distinction went to Singapore — but they did beat the average proficiency score (500) of the 85,000 students who participated in this particular assessment. (Those students represented 44 nations that participated in the 2012 assessment, most of whom are members of the Organization for Economic Cooperation and Development, also known as OECD.) The actual numbers, which can be found on nces.ed.gov, show that about four-fifths of American 15-year-olds are at or above minimal proficiency (proficiency level 2) in problem solving, with the remaining 18.2 percent below that level. The numbers break down as follows: - 5.7 percent of American students were below proficiency level 1 versus the international average of 8.2 percent; - 12.5 percent were at level 1, slightly below the international average of 13.2 percent; - 22.8 percent were at level 2, the minimum level considered "proficient," compared with 22 percent internationally; - 27 percent were at level 3 compared with 25.6 percent internationally; - 20.4 percent were at level 4 compared with 19.6 percent internationally; - 8.9 percent were at level 5 (tied with the international average); and - 2.7 percent were at the highest level of proficiency, level 6, just beating the international average of 2.5 percent. According to a report released by OECD, "Fifteen-year-olds in the United States perform strongest on interactive tasks, compared to students of similar overall performance in other countries. Interactive tasks require students to uncover some of the information needed to solve the problem themselves. This suggests that students in the United States are open to novelty, tolerate doubt and uncertainty, and dare to use intuition to initiate a solution." On the down side, compared with the highest-performing nations (Singapore, Korea and Japan), the largest gaps were seen in "tasks where students must select, organi[z]e and integrate the information and feedback received in order to represent and formulate their understanding of the problem." Performance of American boys and girls was roughly equal (a three-point difference in the average versus a seven-point difference among all participating countries). But there was a significant disparity between immigrant and non-immigrant scores, with non-immigrants scoring on average 14 points higher than immigrants (512 versus 498). Additional details can be found on the PISA site. Sample problem-solving questions are also available to the public.
26
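A quick arithmetic check of the "about four-fifths" claim above: summing the quoted shares at proficiency level 2 and higher gives 81.8 percent for the United States versus 78.6 percent for the international average. A minimal sketch, using only the percentages quoted in the article:

```python
# Percentages quoted in the article, ordered: below level 1, then levels 1 through 6.
us   = [5.7, 12.5, 22.8, 27.0, 20.4, 8.9, 2.7]
intl = [8.2, 13.2, 22.0, 25.6, 19.6, 8.9, 2.5]

def at_or_above_level2(shares):
    # The first two entries are "below level 1" and "level 1";
    # everything from the third entry on counts as proficient.
    return sum(shares[2:])

print(f"US at/above level 2:   {at_or_above_level2(us):.1f}%")    # 81.8%, about four-fifths
print(f"Intl at/above level 2: {at_or_above_level2(intl):.1f}%")  # 78.6%
```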
Here are 10 classroom management tips for schools with one-to-one laptop programs. What other tips do you have? Add your comments! - Have a plan to use your laptops instructionally on a regular basis. If you use the laptops as a part of your regular instruction, students are much less likely to engage in off-task behavior with them. - Set up a classroom site (wiki, Moodle, etc.) that students know to go to every day. Use this as your “home base” and link all other resources there. Include things like sponge or bell ringer activities. - Keep laptops on students’ desks during class (but remember that you can ask to have laptops closed when you don’t want students to use them). - Use the laptops for differentiating instruction and individual or small group activities. This is one of the most appropriate uses for laptops. It will also make your life easier if you don’t try to have the whole class doing an activity simultaneously on the laptops. - Give students a set of classroom rules to follow and include appropriate consequences for not following the rules. Remember to reinforce acceptable and responsible use issues. Make sure to include a rule about bringing the laptop charged and ready to use every day. - Use folders to organize students’ work. - Set up rules for file naming. Here is a suggestion (one possible convention is sketched below this list): this will let you easily identify the assignment and student without opening the document and sort accordingly to put in folders. I like to make this a part of the grade for each assignment. - Have students keep a grid of their user names and passwords for Web 2.0 sites. Keeping track of these is one of the biggest challenges I’ve faced. Anyone have any great strategies for this? - Make students responsible for charging their laptops when they need it. - Empower your students to help solve each other’s tech problems. This is good for them and will also make your life easier. Designate selected students to be “tech squad” helpers. These students can be given special training and incentives for their participation.
26
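The file-naming tip above leaves its suggested pattern blank; one hypothetical convention is Assignment_Last_First.ext. Assuming that pattern (an illustration, not the author's actual suggestion), a short script can sort submissions into per-assignment folders without opening any documents:

```python
from pathlib import Path
import shutil

# Hypothetical convention: <Assignment>_<Last>_<First>.<ext>, e.g. "Essay1_Garcia_Maria.docx"
submissions = Path("laptop_submissions")   # folder of collected student files (assumed to exist)
sorted_root = Path("sorted_by_assignment")

if not submissions.is_dir():
    raise SystemExit(f"No folder named {submissions} found")

for f in submissions.glob("*.*"):
    parts = f.stem.split("_")
    if len(parts) != 3:
        print(f"Skipping (does not match the convention): {f.name}")
        continue
    assignment, last, first = parts
    dest = sorted_root / assignment
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, dest / f.name)   # copy, so the originals stay where students put them
    print(f"{last}, {first} -> {dest / f.name}")
```

Any pattern works as long as it is applied consistently; the point of the tip is that the file name alone identifies the assignment and the student.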
Early Grades Become the New Front in Absenteeism Wars While many think of chronic absenteeism as a secondary school problem, research is beginning to suggest that the start of elementary school is the critical time to prevent truancy—particularly as those programs become more academic. “Early attendance is essential; This is where you really want to work on them,” said Kim Nauer, the education project director at the Center for New York City Affairs, which studies attendance issues. “By the time you get to 5th or 6th grade, you can really get a cascade effect that you can’t recover from. How much money do we spend in a school system on all of this recuperative stuff in high school—getting the kid back and reengaged—as opposed to making sure the kids don’t slip off in elementary school?” Yet statistics show that rates of absenteeism in kindergarten and 1st grade can rival those in high school. An average of one in 10 students younger than grade 3 nationwide is considered chronically absent, defined as missing 10 percent or more of school. That’s about 18 days in a normal 180-day year, according to the San Francisco-based Attendance Counts and the Baltimore-based Annie E. Casey Foundation and others. According to the Casey foundation, which has stepped up its focus on attendance in recent years, the problem is particularly acute among students from low-income families. The foundation reports that, in 2009, more than one in five poor kindergartners was chronically absent, compared to only 8 percent of young students living above the poverty line. Among homeless students, absenteeism can be even more common. Reducing absenteeism is important, experts said, because studies link it to an increased likelihood of poor academic performance, disengagement from school and behavior problems. Moreover, research by the National Center for Children in Poverty shows that the same risk factors that make students more likely to become chronically absent, such as poverty-related mobility or an unstable home life, only serve to intensify the problems caused by missing school. Ms. Nauer, whose report on early absenteeism prompted New York City Mayor Michael Bloomberg to launch attendance-turnaround pilots at 25 schools this year, said educators there were surprised to learn that 10 of the 25 pilot sites are elementary schools. “It’s so much a part of the average experience of the schools that we don’t notice it,” Ms. Nauer said. Teachers would “have about five or six kids gone on any given day, and they realized how absolutely disruptive that was, but they hadn’t really been thinking about it. Nobody even realized the little guys were missing so much school.” Not a Priority Hedy N. Chang, an early-absenteeism researcher and the director of Attendance Counts, said high kindergarten absences are the norm nationwide, but tend to get less attention from educators and policymakers than secondary school truancy. Preschool and kindergarten absenteeism may be more prevalent, Ms. Chang said, because in many states kindergarten attendance is not mandatory and because parents and community members may not understand how early-learning curriculum has changed in recent years. “Kindergarten as an academic resource is a relatively new experience,” Ms. Chang said. 
“Parents may think of their own experience, but kindergartners today are learning to read.” Yet missing school early, when students are learning the most basic skills, can hamstring students in later grades and contribute to poor attendance throughout their academic careers. The National Center for Children in Poverty found in 2008 that on average, students who missed 10 percent or more of school in kindergarten scored significantly lower in reading, math and general knowledge tests at the end of 1st grade than did students who missed 3 percent or fewer days. Moreover, the researchers found chronic absenteeism in kindergarten predicted continuing absences in later grades. A study released this year by the Baltimore Education Research Consortium showed that high school dropouts show steadily increasing chronic absenteeism for years before they actually leave school. Educators agree that improving attendance in the early grades requires a different approach than secondary school truancy interventions, because, as Ms. Chang put it, “Most 5-, 6-, 7-year-olds, they’re not home playing hooky.” Since 2008, when the first attendance data by the Center for New York City Affairs suggested more than 90,000 of the city’s elementary school students miss more than a month of school, the Children’s Aid Society of New York has conducted school-by-school risk assessments and intervention plans to improve attendance at the 22 community-model schools with which it works. “It’s so easy to jump to a conclusion about why a child or a group of children are absent—‘Oh, it’s the parents or it’s the students’—but we have found in our research that it’s really important to do some digging and find out what is going on,” said Katherine Eckstein, the public policy director for the Children’s Aid Society. For example, Children’s Aid attendance monitors found young children’s absences could trigger a ripple effect in families. If younger siblings had to stay home with a flu, asthma, or other ailment, frequently older siblings missed school, too, in order to watch them while the parents worked. In the Bronx, P.S. 61 Francisco Oller School created child care and health partnerships in which staff members interview the families of students who are absent frequently. In exchange for parents ensuring all their children get to school every day on time, an outreach coordinator will arrange and escort children to doctors’ appointments at the nearby Bronx Family Center clinic, or provide school-based in- and after-school care, according to Octavia Ford, P.S.61 site coordinator. The school is working now to provide mental health and social service screenings for students anxious about coming to school. Ms. Eckstein said her group has found that in neighborhoods with high asthma rates, schools with on-campus health centers have higher attendance than schools without those services. “Children and families have relationships with the schools, obviously, but they also may have relationships with the Boys and Girls Club across the street or the health clinic, and you need to leverage all of those relationships,” she said. Similarly, Providence, R.I., schools found that more than 16 percent of urban students in kindergarten through grade 3 missed 18 days of school or more. After extensive interviews with parents, administrators determined that parents’ overnight work schedules contributed heavily to the problem, as returning parents fell asleep before bringing their children to school. In response, Robert L. 
Bailey, IV Elementary School created an early morning child care starting at 7 a.m., to allow parents to drop off students at the end of their shift. This sort of parent education and family support can not only help parents and young students develop better attendance habits, but can also get disconnected families more involved in school generally, according to M. Jane Sundius, the director of education and youth development at the Open Society Institute in Baltimore, which studies absenteeism. “Even parents who don’t feel they can add much to their child’s education, if they are lauded for getting their kids to school each day … there’s so much possibility there,” Ms. Sundius said. Vol. 30, Issue 08
26
Technology @ Ashlawn Instructional Technology Resources At Ashlawn, technology resources provide access to a variety of computer and peripheral tools. Ashlawn teachers participated in the Library of Congress Program and earned LCD projectors for the school. Combined with their efforts and the school technology budget, every classroom currently has an LCD projector and SmartBoard. Many of the specialists are also equipped with SmartBoards and computers available to work with small groups. Additional technology tools are available to students and teachers, including digital cameras in each classroom, digital video cameras, scanners, and networked black-and-white and color printers. Starting in school year 2014-15, Ashlawn 2nd grade students will be participating in the digital learning initiative project, which provides an iPad for each student. Activities will be designed for the 2nd grade teacher to guide students through the project, supporting reading, writing, math and digital literacy. As these students progress to 3rd grade they will take their iPads with them, and incoming 2nd grade students will get their own devices. By school year 2017-18, all students in grades 2-5 will each have a device devoted to their use for interaction with content being taught by their teachers. If you have any questions about the technology program at Ashlawn, please contact Larry Fallon, Instructional Technology Coordinator for Ashlawn.
26
Purpose: To help faculty members appreciate the gulf between their expert knowledge and their students’ novice understandings so they can create positive teaching and learning situations. Bransford, Brown, and Cocking (2000) have identified some important characteristics of experts that have implications for teaching and learning: “1. Experts notice features and meaningful patterns of information that are not noticed by novices. 2. Experts have acquired a great deal of content knowledge that is organized in ways that reflect a deep understanding of their subject matter. 3. Experts’ knowledge cannot be reduced to sets of isolated facts or propositions but, instead reflects contexts of applicability: that is, the knowledge is ‘conditionalized’ on a set of circumstances. 4. Experts are able to flexibly retrieve important aspects of their knowledge with little attentional effort. 5. Though experts know their disciplines thoroughly, this does not guarantee that they are able to teach others. 6. Experts have varying levels of flexibility in their approach to new situations,” p. 31. The teaching implications are numerous. For example, when students must acquire content knowledge in order to later become experts themselves, repetition must be built into the learning process, preferably through as many modalities (text, diagrams, animations, films, problem-solving, testing, etc.) as possible. Group work can be helpful because often students who are more knowledgeable than others can “translate” difficult material in ways that make more sense to other students than the professor’s expert explanations. Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds). (2000). How people learn: Brain, mind, experience, and school. Commission on Behavioral and Social Sciences and Education National Research Council. Washington, DC: National Academy Press. To follow up on any of these ideas, please contact me at [email protected]. This Weekly Teaching Note was adapted from a contribution to the Teaching and Learning Writing Consortium sponsored by Western Kentucky University. Teaching and Learning Center University of Texas at San Antonio Improving Student Learning with (Almost) No Grading Improving Student Achievement with Effective Learning Techniques NYIT Faculty Talk about Teaching: Focus on International Students, Part 2 Characteristics of Effective Feedback Using the PEAR Approach to Develop Stronger Discussion Questions
26
This fall, Congress will evaluate and potentially reauthorize the No Child Left Behind (NCLB) Act of 2001. This will be tantamount to grading education. It is important for all Americans to remember that comprehensive reform is necessary to restore our international educational edge. This reauthorization cannot deteriorate to another referendum on President Bush’s popularity. We should not allow the fate of this landmark legislation to be guided by partisan political agendas. The real question for Washington is whether our national leaders will have the courage to combine all the resources available to the executive branch, the legislative branch, and the judicial branch of our government to solve our problems. Young Americans cannot read well. Young Americans are falling behind international students in advanced scientific studies. We have lost our competitive edge. Something dramatic must be done! But why start over? A key question that Congress must debate concerning NCLB is whether to continue increasing the federal government's authority over education or to turn the control of American schools back to local communities and their citizens. I believe that there must be a savvy use of the following elements to improve our educational system: 1. powerful public schools 2. competitive charter schools 3. voucher programs where appropriate 4. world-class private education 5. teacher accountability NCLB increased federal authority by giving Congress and the U.S. Department of Education new powers to set policies governing America's public schools. The Heritage Foundation (among other groups) notes that one of the unintended consequences of this legislation is the weakening of state testing and “academic transparency.” Despite the fact that NCLB represented only 8.5% of the total funding for public education, some constituencies were accused of reaching for the dollars – while compromising effective educational processes. Some states lowered standards, others changed how tests were evaluated, and many regions attempted to keep parents from understanding what their children were actually learning. Some groups have dubbed these changes a “race towards the bottom.” As states respond to the pressure of NCLB testing by lowering state standards, parents, citizens, and policymakers are denied basic information about student performance in America's schools. The loss of academic transparency will hinder parents from knowing whether or not their children are learning and will prevent policymakers from judging how well public schools are performing. Bishop Harry Jackson is chairman of the High Impact Leadership Coalition and senior pastor of Hope Christian Church in Beltsville, MD, and co-authored Personal Faith, Public Policy (FrontLine, March 2008) with Tony Perkins, president of the Family Research Council.
26
Want to earn more money? You might need more education than a high school diploma or GED. Many higher-paying jobs require more schooling. As you weigh the pros and cons of going to school, think about how education pays. People with college degrees or formal job training often make more money than those without a degree. They are also more likely to keep their jobs or quickly find a new one.

| Unemployment Rate* | Education Level | Typical Weekly Earnings** |
| 11.0% | Didn't Finish High School | $472 |
| 7.5% | GED/High School Graduate | $651 |
| 7.0% | Some College, No Degree | $727 |

Source: U.S. Bureau of Labor Statistics, 2013 wage data
*Persons are classified as unemployed if they do not have a job, have actively looked for work in the prior 4 weeks, and are currently available for work. Persons who were not working and were waiting to be recalled to a job from which they had been temporarily laid off are also included as unemployed.
**Typical weekly earnings shown are the median wage. The median is the middle wage when listing all of the wages from low to high. The earnings are for year-round, full-time employed workers age 25 and older.
26
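Since the table reports median weekly earnings, a small worked example of "the middle wage when listing all of the wages from low to high" may help; the wage list below is invented for illustration.

```python
import statistics

# Hypothetical weekly wages for seven full-time workers, in dollars, already sorted.
wages = [350, 472, 510, 651, 700, 727, 1100]

print(statistics.median(wages))  # 651 -- the 4th of 7 sorted values
print(statistics.mean(wages))    # ~644.3 -- the mean can differ noticeably from the median
```

For an even number of wages, the median is the average of the two middle values, which statistics.median handles automatically.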
PISA: what makes the difference?
The huge difference in the level and variance of student performance in the 2000 PISA study between Finland and Germany motivates this paper. It analyses why Finnish students performed so much better by estimating educational production functions for both countries. The difference in the reading proficiency scores is decomposed into different effects using Oaxaca-Blinder and Juhn-Murphy-Pierce decomposition techniques. The analysis shows that German students have on average a more favorable background except for the lowest deciles, but experience much lower returns to these background characteristics in terms of test scores than Finnish students. The results imply that early streaming in Germany penalizes students in lower school types and leads to a greater inequality of educational achievement. It remains unclear, however, whether this can be attributed to the effect of school types per se or to the student background and innate ability that determine the allocation of students into school types. Overall, the variation in test scores can be explained much better by the observable characteristics in Germany than in Finland.
Date of creation: 08 Mar 2004
References:
- Bishop, John H. & Woessmann, Ludger (2004). "Institutional Effects in a Simple Model of Educational Production," Education Economics, 12(1), 17-38.
- Blinder, Alan S. (1973). "Wage Discrimination: Reduced Form and Structural Estimates," Journal of Human Resources, 8(4), 436-455.
- Oaxaca, Ronald (1973). "Male-Female Wage Differentials in Urban Labor Markets," International Economic Review, 14(3), 693-709.
- Hanushek, Eric A. & Luque, Javier A. (2002). "Efficiency and Equity in Schools around the World," NBER Working Paper 8949.
- Lauer, Charlotte (2000). "Gender Wage Gap in West Germany: How Far Do Gender Differences in Human Capital Matter?," ZEW Discussion Paper 00-07.
- West, Martin R. & Woessmann, Ludger (2006). "Which School Systems Sort Weaker Students into Smaller Classes? International Evidence," European Journal of Political Economy, 22(4), 944-968.
- Jann, Ben (2005). "Standard Errors for the Blinder-Oaxaca Decomposition," German Stata Users' Group Meetings 2005.
- Todd, Petra E. & Wolpin, Kenneth I. (2003). "On the Specification and Estimation of the Production Function for Cognitive Achievement," Economic Journal, 113(485), F3-F33.
- Juhn, Chinhui, Murphy, Kevin M. & Pierce, Brooks (1993). "Wage Inequality and the Rise in Returns to Skill," Journal of Political Economy, 101(3), 410-442.
- Blau, Francine D. & Kahn, Lawrence M. (1992). "The Gender Earnings Gap: Learning from International Comparisons," American Economic Review, 82(2), 533-538.
26
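For readers unfamiliar with the first method named in the abstract, here is a minimal sketch of a two-group Oaxaca-Blinder decomposition on synthetic data: the mean gap in an outcome is split into a part explained by differences in average characteristics (endowments) and a part due to differences in the returns to those characteristics (coefficients). The data, the single background variable, and the choice of group B's coefficients as the reference are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def simulate(beta, mean_x):
    # One background characteristic plus an intercept.
    x = rng.normal(mean_x, 1.0, n)
    X = np.column_stack([np.ones(n), x])
    y = X @ beta + rng.normal(0.0, 1.0, n)
    return X, y

# Group A: better average background AND higher returns to it.
X_a, y_a = simulate(beta=np.array([1.0, 2.0]), mean_x=0.5)
# Group B: weaker background and lower returns.
X_b, y_b = simulate(beta=np.array([1.0, 1.0]), mean_x=0.0)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_a, b_b = ols(X_a, y_a), ols(X_b, y_b)
xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)

gap = y_a.mean() - y_b.mean()
endowments   = (xbar_a - xbar_b) @ b_b  # gap due to different characteristics
coefficients = xbar_a @ (b_a - b_b)     # gap due to different returns
print(f"total gap      {gap: .3f}")
print(f"  endowments   {endowments: .3f}")
print(f"  coefficients {coefficients: .3f}")
print(f"  sum          {endowments + coefficients: .3f}")  # matches the total gap up to floating-point error
```

The Juhn-Murphy-Pierce technique mentioned alongside it extends the comparison beyond the mean, decomposing differences across the whole distribution and adding a residual (unobserved) component, which is what lets the paper speak to variance as well as average performance.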
John is playing cards, and he needs your help! Help him ID the right card by completing the subtraction problems, then coloring it in.
27
Writing an essay or research paper can seem like a daunting task, but following a few basic guidelines can help you improve your writing and possibly your grades. It's important to understand the difference between thesis statements and main ideas to make your paper clear and concise. Your paper should contain main ideas in each paragraph, but only one thesis statement. Research is the key to solid main ideas and strong thesis statements, so make sure you study your topic before you sit down to write. Identifying Main Ideas Before you start writing, it's a good idea to practice identifying main ideas as you read. A topic is the overarching idea or subject of a paper, but a main idea is a "key concept" within a paragraph. It is simply the focal point of that paragraph. If you're reading a textbook about mammals, mammals is the topic. Each paragraph should center on a main idea pertaining to mammals, such as their spinal functions or eating habits. All main ideas should relate to the overall topic of the paper. Writing Main Ideas When writing, make sure every paragraph of your paper clearly expresses a main idea at the beginning, and possibly again at the end. The first main idea is mandatory and gives your reader a clear idea of what that paragraph discusses. The main idea at the end is optional and can sum up the paragraph, provide a transition to the next paragraph, or both. For example, you may write a paragraph about aquatic mammals with a sentence at the end about how they compare to land mammals, because that comparison is the main idea of the next paragraph. Identifying Thesis Statements Every essay or research paper should have only one thesis statement. It is usually found near the end of the opening paragraph. This statement tells the reader the direction of the paper and how you plan to interpret the information. It can answer a question, make an argument or explain a problem. A thesis statement should be very specific and clearly capture the author's position on the topic; for example, "Aquatic mammals have more complex respiratory systems than land mammals." Now the reader knows that the author plans to present evidence to support this statement. Writing Thesis Statements You should always thoroughly research your topic and then think about it before writing a thesis statement. Pick one issue or aspect of your topic to focus on because thesis statements with multiple ideas are often weak and messy. You want to express a strong opinion that you can support with your research. For example, don't say, "In my opinion, aquatic mammals have more complicated respiratory systems." Just say, "Aquatic mammals have more complicated respiratory systems," then back up that statement throughout your paper.
27
Dissertation & Research Paper Abstracts: Tips from 5 Pros A dissertation abstract is one of the most misunderstood, highly overlooked, difficult requirements for a dissertation writer to conquer. Yet, in reality, a dissertation abstract is relatively easy to write. College students read dissertation abstracts regularly without realizing that they are doing so. As part of the research for many of the articles that a learner will be required to write in college, that student will peruse dozens, maybe hundreds, of abstracts, including dissertation abstracts. Reading these abstracts thoroughly will give students a good idea about what purpose dissertation abstracts serve and how to write them. A dissertation abstract serves as a short summary of a college report. Although they appear at the beginning of an academic article, dissertation abstracts are usually the last part of the document that is written. This is because it is nearly impossible to write a summary of a document that has not yet been written. The dissertation abstract is rarely more than one or two paragraphs of text which summarizes the results. It generally also includes the thesis. Dissertation abstracts are found in journals, journal article and dissertation listings, and they are sometimes included in a job applicant's curriculum vitae or resume. The dissertation abstract serves as a brief summary of the report which helps readers and researchers determine if the entire paper is something they need, or want, to read. Although some college report abstracts can be up to two pages long, the shorter the better. When preparing a dissertation for a journal or for use on a CV, it is advisable to limit the abstract to no more than 500 words. Dissertation abstracts should also make use of keywords. Keywords are words or phrases that a student or researcher might use to search within a database for a dissertation pertinent to their own studies. For instance, if a university report is about the use of drugs on college campuses, the report abstract should make use of keywords such as "drugs," "college," "drug use," "drugs on campus," and other words and phrases which pertain to the topic. One way a student can make sure that he or she uses the right keywords and keyword phrases is to think about what words and phrases he or she might use to find a similar reference project. Although, it is difficult to summarize what is often a dissertation of a hundred or more pages into a few short paragraphs, it is important to write a thorough dissertation abstract that is truly representative of the report study. Since dissertation abstracts serve so many important functions, there is no doubt students should pay particular attention to creating the best abstract possible. Viewpoint of Author #2 A research paper abstract is often an integral part of a writing assignment. Not all students will need to write abstracts; however, when a professor assigns a research paper abstract, students need to know how to write one succinctly and effectively. A research paper abstract is basically a recap of the content of a college report. An abstract is usually one paragraph (depending on the size of the report) that summarizes what the article is about in very clear terms. Many people write abstracts in order to help readers to decide whether or not they want to read the document. All research paper abstracts should be written only after the student has completed the paper and drawn his or her own conclusions. 
The abstract does not necessarily recap the conclusion, though. Instead, the abstract tells readers what they will be reading, but not necessarily what they will learn from the document. In many cases, the report abstract will appear before the introduction of the report and after the cover page. However, some professors prefer that students put the abstract on the cover of the document. For this reason, students need to be sure that they read the project assignment requirements thoroughly before submitting their final work. In order to write a research paper abstract, learners can review their document outline to get a better idea of the key points that he or she expressed. These key points should be addressed in the abstract, but they do not necessarily have to be explained. A research paper abstract is very similar to other parts of an academic article, such as an introduction. An introduction also tells readers more about what they are going to learn from the document. However, the difference between introductions and research paper abstracts is that introductions provide background information and introduce the topic of the document. Abstracts give a general overview of the report, but may not include any background information. Viewpoint of Author #3 A thesis abstract is a brief but comprehensive summary of an undergraduate or graduate thesis—a long and original investigation-based document. Thesis abstracts are presented at the beginning of theses to provide the reader with an overview of the document's contents. Thesis abstracts are different from abstracts a student may have written for other texts, such as conference papers and journal articles, because thesis abstracts are typically required to be no more than 350 words. This is the maximum abstract word limit of UMI Publishing, an international thesis and dissertation publishing database to which most graduate theses and dissertations are sent. Most universities require their graduate students to submit completed theses to UMI; therefore, must universities require that their graduate students cap their abstracts at 350 words. Writing a comprehensive summary of a large research text in 350 words or less can be a great challenge; thus, the primary hurdle of thesis abstract writing is concision. To begin, the student should write what he or she considers a thorough summary. This includes discussing the research question, providing a reasonably detailed outline of the research methodology, and offering a thorough report of the findings and implications of the project. In essence, all of the major components of the thesis project should be discussed in brief. Students should remember that their audience will be primarily researchers who are searching the UMI database for information regarding the student's thesis topic, and should strive to include all of the information of interest to a researcher in the abstract. Once the student has composed the first draft of the thesis abstract, he or she must begin eliminating excess information and words. The student should first attempt to eliminate excess information—anything that is repeated or unnecessary to the understanding of the thesis project. It is likely that the learner will be able to eliminate a few sentences by critically determining what material is absolutely indispensable to the full comprehension of the student's thesis and discarding what is not. The abstract must next be pared down by eliminating excess language. 
This requires the student to rewrite each sentence in the most direct way possible. Adjectives that are not entirely necessary for the understanding of the content should be deleted and long, complex sentences should be recast in simple structures. The student should do this until the abstract has been narrowed to 350 words. If the student finds this process exceptionally difficult, he or she should consult a friend, peer, or teacher, as it is often easier for third-party revisers to cut down texts than for writers to pare down their own. Viewpoint of Author #4 Dissertation Abstracts International, also known as DAI, is an electronic database of graduate theses and dissertations. Most North American institutions of higher learning, and some institutions abroad, require their graduate students to submit a copy of their article and dissertation to DAI. DAI then publishes an abstract of the thesis or dissertation in its database so that researchers may find and order a copy of a thesis or project that may be relevant to their area of study. Dissertation Abstracts International is a database of theses and dissertations only. It should not be confused with more comprehensive databases containing the works of scholarly, peer-reviewed journals. Furthermore, though dissertations and theses are valid pieces of scholarship because they have been written under the advisement of a committee of advanced professors, they may not always carry the same credibility as research studies published in peer-reviewed journals. Different from many full-text databases, Dissertation Abstracts International often does not provide searchers of its database with immediate access to all theses and dissertations listed. Instead, DAI will provide an abstract of all listed theses or dissertations so that researchers may assess whether or not the full document will be useful to them. The full thesis or report itself must typically be ordered. Because the abstract is frequently the only available indicator of the scope and topic of a thesis or dissertation listed on the Dissertation Abstracts International database, doctoral students who are writing a thesis or dissertation are encouraged to write a comprehensive abstract of their documents. Dissertation Abstracts International mandates that these abstracts be less than 350 words; therefore, the abstract writer must be concise while attempting to present a thorough understanding of his or her work. The abstract should offer a summary of each section of the thesis or dissertation, taking care to include information such as the type of study the thesis or dissertation is reporting on, the subject of the study, the participants used, the instruments and data analysis tools implemented, and the study's findings. The abstract writer should also include important terms or ideas addressed in the body of their article or dissertation to guide researchers looking for texts on those terms or ideas to their particular text. Dissertation Abstracts International can be accessed via the online resources of most universities and educational institutions. Dissertation and writers may visit the site to determine more DAI guidelines and to view samples of abstracts. Viewpoint of Author #5 A research paper abstract is a scholarly, academic writing that requires students to gather, analyze, and synthesize information about an existing research paper. It is basically a concise summary of a lengthy document. 
In order to write research paper abstracts, students must carefully follow steps that lead to the compilation of accurate compositions. First of all, the student must select a topic and formulate a specific research question. Then, it is the student's responsibility to gather facts that support his or her answer(s). By finding current and relevant sources or materials such as books, magazines, encyclopedias, and journals, the pupil can begin to take notes based on information found in references. It is very important to gather both factual evidence and opinions from reliable sources. The next step is to outline the research paper abstract. The best research paper abstracts are generated from well-developed outlines. Students must carefully review their subject, purpose for writing, and the kind of materials found during their research activities. By sorting through notes, learners can categorize the sections of the research paper and provide supporting details in the form of examples, reasons, and ideas for each section. This outlining step is the key to arranging the research paper abstract and writing a good first draft. After writing the first draft, the next step is to polish and proofread the research paper abstract. Students who edit their work and check for proper spelling, phrasing, and sentence construction often find that their final drafts are exemplary. The abstract is typically placed in the first section of the paper and sums up the paper's major points in 100-350 words. It expresses the main purpose and argument of the research paper. A good abstract is unified, coherent, and concise. It offers logical connections between the writer's reflections and information. Therefore, research paper abstracts provide the reader with the research topic, the research problem, the main findings, and the main conclusions. The main sections of the actual paper include the abstract, an introduction, body, conclusion, and a references page. Students must carefully adhere to the citation and referencing guidelines provided by the instructor. Related Essays for Sale Tutorial Video on How to Write . . . Successfully
27
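Several of the authors above stress hard word limits (350 words for UMI/DAI submissions, roughly 500 for journal or CV use), so a trivial word count check on a draft can save a round of revisions; the file name and limit below are placeholders.

```python
# Flag an abstract draft that exceeds a word limit (350 here, per the DAI guidance above).
LIMIT = 350

with open("abstract_draft.txt", encoding="utf-8") as f:
    words = f.read().split()

print(f"{len(words)} words")
if len(words) > LIMIT:
    print(f"Over the limit by {len(words) - LIMIT} words -- keep trimming.")
else:
    print("Within the limit.")
```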
Mystery writers need to respect the intelligence of their readers. Mystery fans have a natural interest in solving puzzles. Don’t spoil their enjoyment of sleuthing by explaining everything they should be allowed to discover. Here are 5 ways to show you respect your readers’ intelligence. - Don’t insert yourself into the story by explaining the meaning of what’s happening. Allow the reader to respond to the characters and action on their own. - Don’t overdo the use of adverbs in an effort to describe actions. Verbs should carry the weight of the description. Use the most vivid verbs you can find. - Don’t overdo adjectives in an effort to describe places, characters and feelings. Add just enough description to provide a sense of the whole character. - Don’t tell your readers how they should feel. Provoke emotion through character reactions and interior dialogue. - Don’t reveal your own personal biases through your writing. Better to write factual descriptions rather than explicit emotional direction. Better to remain invisible. Let your readers’ own emotional and psychological backstories and personal, intimate temperaments dictate how they respond to your characters and events. Have faith in the intelligence of your mystery readers. Their personal input based on their own life experiences will increase their enjoyment of your mystery novel.
27
Mr. Scuba Diver is lost among all these subtraction problems. Can your child help him find his way? As he gives Mr. Diver a hand, he'll practice two-digit subtraction with borrowing for a quick mental math workout. Check out the rest of the printables in this series for more ocean-themed math worksheets.
27
Help your third grader understand fraction basics with this worksheet. Each shape is divided into sections, and some of them are shaded. Can she figure out what fraction the shaded portions represent?
27
Story Times are designed to foster an early and ongoing love of books, libraries, and learning. What happens at Story Time? Infant Story Time (Ages Newborn to 12 months) Infants make connections between books and language. Parents and caregivers learn how to use Early Literacy practices at home. Find Infant Story Times World Language Story Time Kaleidoscope Play & Learn is an organized play group for newborns to age 5 and their caregivers. Have fun learning while we play, sing songs, read and create art! Find Kaleidoscope Play & Learn programs at your library.
27
Here is a fun activity to help beginning geography students get to know their states. Color the state flag of Ohio, and read a fun fact about the symbols on the flag as you go.
27
Daily Word Ladders: Grades 4-6: 100 Reproducible Word Study Lessons That Help Kids Boost Reading, Vocabulary, Spelling & Phonics Skills--Independently! Daily Vocabulary Boosters: Quick and Fun Daily Activities That Teach 180 Must-Know Words to Strengthen Students' Reading and Writing Skills (Teaching Resources) Daily Word Ladders: 80+ Word Study Activities That Target Key Phonics Skills to Boost Young Learners' Reading, Writing & Spelling Confidence Daily Word Ladders: Grades 1-2: 150+ Reproducible Word Study Lessons That Help Kids Boost Reading, Vocabulary, Spelling and Phonics Skills! Oxford Picture Dictionary High Beginning Workbook: Vocabulary reinforcement activity book with 4 audio CDs Oxford Picture Dictionary Low Intermediate Workbook: Vocabulary reinforcement Activity Book with Audio CDs English Language Learners: Vocabulary Building Games & Activities, Grades PK - 3: Songs, Storytelling, Rhymes, Chants, Picture Books, Games, and ... Purposeful Communication in Young Children Oxford Picture Dictionary Low Beginning Workbook: Vocabulary reinforcement activity book with 3 audio CDs 50 Conversation Classes: 50 sets of conversation cards with an accompanying activity sheet containing vocabulary, idioms and grammar. Vocabulary for the Gifted Student Grade 2 (For the Gifted Student): Challenging Activities for the Advanced Learner
27
Peter Pig's Money Counter Description: With the help of wise Peter Pig, kids practice sorting and counting coins to earn money for their “banks”—all the while learning fun facts about U.S. currency. They learn to recognize and sort coins based on value, adding up multiple coins and more. The game was developed by Visa Inc. for elementary school aged kids 4-7. Category: Math Games Note: This game requires Adobe Flash Player. If game does not load, try installing the newest Flash Player. This game takes a few seconds to load.
27
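As a tiny illustration of the arithmetic the game practices (recognizing coins by value and adding up a mixed handful), here is a sketch; the particular handful of coins is made up.

```python
# U.S. coin values in cents.
VALUES = {"penny": 1, "nickel": 5, "dime": 10, "quarter": 25}

# A made-up handful of coins to count.
handful = ["quarter", "dime", "dime", "nickel", "penny", "penny", "penny"]

total_cents = sum(VALUES[c] for c in handful)
print(f"Total: {total_cents} cents (${total_cents / 100:.2f})")  # 53 cents ($0.53)
```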
Soar Into Spring: A Multiplication Activity: Coordinate Graphing with Multiplication (Multiplication x 6) Students find the answers to multiplication problems, then use their answers to reveal a hidden picture. Number of pages: 3 Subject: Math, Multiplication Grade: 2 - 3 Theme: Spring Themes Type: Learning Activities and Worksheets Author: Deborah Schecter File Size: 675 KB This resource is in PDF Format.
27
Purpose and process: a reader for writers This innovative reader focuses on writers' purposes and processes for reading and writing, and on the connections between reading and writing. Every chapter integrates purpose, process, and rhetorical strategies for achieving specific writing goals. Sixty-four selections by both professional and student writers illustrate these purposes. The readings address reading and writing purposes and processes: observing, remembering, investigating, explaining, evaluating, problem solving, and arguing. For those interested in improving their reading, writing and research abilities.
27
ReadWriteThink couldn't publish all of this great content without literacy experts to write and review for us. If you've got lesson plans, videos, activities, or other ideas you'd like to contribute, we'd love to hear from you. Find the latest in professional publications, learn new techniques and strategies, and find out how you can connect with other literacy professionals. Developing Characterization in Raymond Carver’s “A Small, Good Thing” Grades: 9–12. Lesson Plan Type: Standard Lesson. Estimated Time: Three 50-minute sessions. Grades 6–12 | Student Interactive | Inquiry & Analysis Students can map out the key literary elements of character, setting, conflict, and resolution as prewriting for their own fiction or as analysis of a text by another author in this secondary-level interactive. Grades 7–12 | Calendar Activity | May 18 Students identify characteristics of Carver's work and compare them to other authors, as well as to literary minimalism. Students then write original poems or short stories in minimalist style. Grades K–12 | Professional Library | Book Student engagement with community becomes the centerpiece of the book, an engagement that takes place across disciplines through projects involving history, environment, culture, and much more. Professional Library | Book Rubenstein offers specific, classroom-tested strategies for teaching Raymond Carver's short stories and poems in the high school English classroom.
27
Smiley Sight Words™ - App Store Info DescriptionSmiley Sight Words teaches high-frequency sight words to your child! The 1,600+ sight words included with Smiley Sight Words comprises up to 85% of the text in a child's early reading materials. A child who can recognize just 8 of 10 words in a sentence can typically understand its meaning! "Sight words" often cannot be illustrated via simple pictures or sounded out according to regular phonetic decoding rules, thus they need to be learned and recognized "on sight". Smiley Sight Words is a handy way to keep track of and encourage a child's mastery of their sight words. • Optimized for iPad, iPhone, and iPod touch. • Over 1,600 common sight words comprehensively compiled from Dolch, Fry, Pinnell-Fountas, UK, and many other high-frequency word lists. • Beautifully recorded American-English pronunciations for all words • Top 1000 - 40 levels sorted by frequency • Dolch 300 - 7 levels (pre-primer, primer, grade 1-3, nouns 1-2) • Dolch 300 - 11 levels sorted by frequency • Fry 1000 - 10 levels sorted by frequency • UK 1000 - 10 levels sorted by frequency • AU Rainbow - 12 levels sorted by color-code. • More Words - Upper and lower-case alphabet, numbers 0-100, colors, shapes, animals, months, days of week, family words, common nouns • Words organized by number of letters • 10 Customizable Flash Card sets • 10 Shared Customizable Flash Card sets common to all 5 players • Copy and Paste words from sets or emails • Email export of word lists • Show and Hide individual words • Mark flash cards with smileys - try to earn 3 thumbs-up smileys for each word. • Flash cards respond to swipes, shakes, and taps • Customizable settings for up to 40 players • Player names and photos can be customized • Progress Report can be saved to Photo Library • D'Nealian, Zaner-Bloser, Times, Helvetica, Marker Felt, Chalkboard, and Noteworthy fonts • In-app Help documentation REVIEW FEEDBACK -- Please contact us first at [email protected] @Paul Personage - Version 2.0 now supports iPad natively! @Fl bobber - Version 2.0.1 increased support to 40 profiles. @AnotherNicknametoo, @Mel in Cincy, @TwinMomma35 - Our SightWords Pro app offers custom words and recordings. We hope to add that functionality in Smiley Sight Words in the future.
27
Help all students progress through reading levels with confidence The three components of text complexity include quantitative, qualitative, and reader and task measures. On the quantitative end, the Lexile Range for each grade-level collection meets or exceeds CCSS recommendations. Additionally, each library has been carefully leveled using a qualitative measure system factoring in reader variables such as vocabulary, language complexity, length of text, and theme. Teachers prompt and support students' reading by balancing the difficulty of the text with support for students reading the text. Engaging texts motivate students to read and improve comprehension skills. As teachers work with students in small groups, students are challenged to engage with texts at higher reading levels and greater text complexity. A wide range of texts in the Scholastic Guided Reading programs exposes students to a variety of text types across content areas. As students work with these texts, they build a core foundation of literacy skills essential for growing into stronger readers.
27
Even if you have a picky eater on your hands, this word scramble challenges him to know his vegetables and it boosts spelling and vocabulary skills. Let your child puzzle-out these food words!
27
Regardless of the kind of business you're in, business reports are used to explore a variety of topics, generally related to summarizing the current status, and improving profits. Reports may be short or intensely detailed, depending on the company's needs and requirements. You can learn to define your objectives and collect your data appropriately, organizing your findings into a professional business report, whatever your purpose. Part 1 of 3: Getting Started. 1. Define the objective and purpose of the report. Depending upon your line of work, business reports may be written on a variety of topics, but the objectives are usually assigned, not chosen. In other words, you'll usually be asked to write up a business report by a superior, or by a writing instructor in business school, and assigned a particular topic to investigate, draw conclusions about, and come up with a recommendation for a course of action. - Most business reports center around a very specific premise: increasing profits. While you may be asked to do a customer analysis, reporting on buying trends in a particular demographic, the big question is whether or not it will impact the business you do. - Think of this as the "so what?" question. If you identify something that seems important in your analysis, imagine your boss (or whoever will be reading this) asking, "So what?" or "How does this affect us?" 2. Consider your audience. Who will be reading this report? Why do they care about the findings? What will they learn by reading? What do they expect to hear? These are important questions to keep in mind while you're drawing conclusions and making suggestions about a particular course of action. - If the report is for your board of directors, the report should contain more information than if it's directed toward employees who work within your division. Consider the information that is pertinent to whoever will be reading the report and don't include data that is unnecessary. - The purpose and intentions of the report should be detailed at the very beginning, in the introduction. 3. Identify what you need to learn. The hardest part of writing a business report isn't in the writing; it's in forming a conclusion and collecting the data necessary to support that conclusion. This involves a variety of skills, including data collection and market analysis. What do you (and, in the end, management) need to know to make an informed decision about the topic? - For example, if you work for Acme and need to explore whether or not increasing the production of Widget X will be profitable, you need to conduct a thorough market demand analysis and production analyses. - To analyze these topics thoroughly, you'll need to ask a variety of questions: How many Widgets will you be able to sell? At what price point will the Widget be profitable? How much will it cost to increase production? Who is buying Acme products? 4. Collect data. When you identify what you need to learn, the next step is learning it. Depending on the purpose of your report, you may be required to collect all kinds of different data from a variety of sources. You might need to look at performance reports, production numbers, quality control information, attendance reports or expenses. The purpose of the data is to give concrete details and support to a particular course of action that you're going to suggest in the report. - Data may come internally, which means you'll be able to collect it quite quickly.
Sales figures, for example, should be available from the sales department with a phone call, meaning you'll be able to get your data and plug it into your report fast. - External data may also be available internally. If you've got a department that already does customer analysis data collection, borrow their figures. You don't need to conduct it on your own. This will be different for every type of business, but the writer of a business report often doesn't need to conduct first-hand research. Part 2 of 3: Writing the Report. 1. Introduce the topic. The introduction to your report should specifically outline the topic at hand and explain why you're exploring it. What is the objective of this report? What will your boss learn after reading it? Why is this topic important to consider? - Again, as you write, try imagining one of your superiors looming over your shoulder, asking, "So what?" or "Why do we care about this?" Answer those questions right off the bat, before you get into the data itself. 2. Provide relevant data and explain it. Depending on your objective, the body of your report will consist of the wealth of specific details and data you acquired, related to that objective. You might have sales figures, customer analysis data, and a variety of other things to include, so it's important to stay organized. In the body, you'll present and, more importantly, explain the specific details. - Imagine that your boss doesn't understand the data you've collected and explain it in as specific a way as possible. After every figure you quote, answer the question "What does this mean?" 3. Break up relevant data into separate sections. A business report can't be a big flood of figures and information. Organization into separate sections is key to the success of a well-written business report. Keep sales material separate from customer analysis, each with its own header. These sections should form the bulk of a business report. - Organize the report into appropriate section headers, which can be read through quickly as standalone research but which together support the basic objective of the report. - Since some of the sections may be dependent upon analysis or input from others, you can often work on sections separately while waiting for the analysis to be completed. 4. Draw conclusions. After presenting your data, it's important to summarize the important details and point your reader toward the conclusions you draw, based on that data. So, if you've just presented us with a variety of different details and numbers to consider, what should this all be telling us about the topic? So what? - Usually, a section called "Conclusions" or "Findings" will be included toward the end of business reports, just before you make your specific recommendations. 5. Make a specific recommendation. Clearly specify what your expectations are for the future. The goals should be measurable. Perhaps you wish to increase production. You might set a goal for a 20 percent increase in production and set a deadline for when that goal will be measured. - Goals should include specific actions, not vague statements. Write out any changes in job descriptions, schedules or expenses necessary to implement the new plan. Each statement should directly indicate how the new method will help to meet the goal set forth in the report. 6. Write the executive summary. The executive summary should go on the very first page of the report, but it should be the last thing that you write.
The executive summary should summarize your findings and conclusions, giving a very brief overview of what someone would read, should they choose to continue reading the entire report. It's like a trailer for a movie, or an abstract in an academic paper. - The executive summary gets its name because it's likely the only thing a busy executive would read. Tell your boss everything important in no more than 200-300 words here. The rest of the report can be perused if the boss is more curious. Part 3 of 3: Finishing a Business Report. 1. Use infographics for applicable data, if necessary. In some cases, it may be helpful to include graphs or charts displaying quantitative data. Use color within the display, as it draws more attention and helps to differentiate the information. Whenever possible, use bullet points, numbers, or boxed data to help with readability. This sets your data apart from the rest of your report and helps to indicate its significance. - Generally speaking, visual figures are a great idea for business reports, because the writing and the data itself can be a little dry. It's not necessary, strictly speaking, but it can help to make a boring report a little more readable. - Use boxes on pages with a lot of text and no tables or figures. A page full of text can be tiresome for a reader, and a box can be an effective way of summarizing the important points on the page. 2. Formalize the style and wording of the report. A report should have consistent fonts, headers and footers, and should be written in the most formal business language possible. Business reports aren't the time for cleverness or slang. Make it professional. - Generally, business writing is written in the passive voice, and it is one of the few kinds of writing where the passive voice is usually preferable. - No slang should be used in business writing, but business-specific jargon is usually acceptable. If it's common knowledge in your business, it shouldn't be a problem. 3. Have someone peer review your report. A fresh set of eyes is often helpful. Ask a coworker to review the objective of the report, so they can evaluate whether you achieved your objective, or whether you might be able to do a bit more. - Be open to the feedback. Better to hear it from a coworker than from a boss. Review each comment from the peer review and re-write the report, taking comments into consideration. - Re-read the report yourself, trying to see it through the eyes of the person for whom the report is intended. Ask yourself if the analysis in the report leads to findings that lead to recommendations. See if you can follow this series of if–then statements. 4. Create a table of contents. Format the business report as formally as possible, creating a table of contents to make it easy to reference and flip through your report. Include all relevant sections, especially the executive summary and conclusions. 5. Cite your sources, if necessary. Depending on what kind of research you've done, you might need to explain where you're getting your information from. The purpose of the sources page on a business report is to provide a resource for others, should they wish to follow up on the data and look into it. 6. Provide your report in the appropriate format. Print enough copies to distribute to everyone in the meeting where you will present the report, or for your boss. If there are several pages, bind them together appropriately, preferably in a plastic folder. Include all necessary attachments and additional documents for easy access.
27
By GreatSchools Staff Preschoolers find wonder in so many places, but they seem to be especially drawn to animals and their antics. Parents also reveal their favorite classics like Harold and the Purple Crayon and new takes on bedtime stories. These books are sure to keep your little ones engaged and inspired: • The Pigeon books by Mo Willems. This series about a persistent pigeon draws in young children by making the bird’s adventures (and hilarious tantrums) relatable. A fun read for both parents and preschoolers. • How Do Dinosaurs books by Jane Yolen. Yolen is one of the most prolific children’s writers of our time, with titles for kids in preschool through fifth grade. This series is a proven hit with the under-6 set. From How Do Dinosaurs Say Goodnight? to How Do Dinosaurs Clean Their Rooms? these books are sure to delight. • It's Okay to Be Different by Todd Parr. As your youngster starts noticing the world around him, he’ll begin questioning why things are different. GreatSchools parents recommend this book to help children learn about — and accept — a diversity of people and situations. • Yummy Yucky by Leslie Patricelli. Learning what is and isn’t OK to put in your mouth is an important part of growing up. Burgers? Yummy! Boogers? Yucky! Parents love Yummy Yucky for its humorous take on the subject and simple but colorful illustrations. • Harold and the Purple Crayon by Crockett Johnson. Many of us will remember imaginative Harold from our own childhoods. These books about an artistic little boy, with true-to-life illustrations, have withstood the test of time and are a must-have for every child’s collection. • Good Night Our World books, various authors. Leave the “great green room” behind and expand your child’s bedtime repertoire with these wonderful stories that take place in different cities and states around the country. Have your own book recommendations? Add them to the Parent Picks list!
27
BACK TO SCHOOL FRY'S FIRST 100 SIGHT WORD SENTENCE CARDS F This is a PDF file with all of Fry's First 100 Sight Words in fun practice sheets for your students. In our district our kindergarteners are required to learn all 100...and this makes for some excellent practice. The sheets are simple and kid friendly. You can use these as morning work or you can make a literacy center with them. If you open the preview you will get 4 sheets to sample with your students (freebie). Hope you and your students like them. -Teaching by the Beach This engaging and fast-paced sight word game will have your students squealing with delight as they try to pick the friend cards for a bonus and avoid the wind cards- all while practicing Fry's first 100 words. ($) Leaf Pile- A Sight Word Game Talk About It Writing Strategy for Kids...helpful way to get beginning writers to get their ideas out on paper Fun and interactive way to learn 220 sight words! ($) This packet gives you everything you need to make writing journals for your students. These journals are perfect for Work on Writing!
27
Students are ready to learn about persuasive writing beginning in middle school (junior high). You need to know how to teach persuasive writing, which includes several elements that enable young writers to form good arguments. 1. Teach your students that they already know how to persuade other people. - Ask your students how many times they have tried to get their parents to allow them to do something. - Tell them they have already learned how to speak or negotiate persuasively, so all they need to do is use similar skills for persuasive writing. - Teach the class that, to write persuasively, they need to present facts and not their own opinions. - List examples of persuasive essays and writing. These can include campaign speeches, advertising pitches or news and magazine articles. 2. Present the goal of persuasive writing: to win acceptance of your position or ideas. - Show your class how to use facts, judge evidence, fact-check, state their ideas clearly and listen to others closely and critically. 3. Teach your class about position statements, argumentative propositions and thesis statements. - Demonstrate how to state an opinion clearly in 1 or 2 sentences in the first paragraph. - Show your students how to define the boundaries, or scope, of their argument. This is the situation specific to the argument. - Lead your students to make a debatable statement, such as, "The school administration should allow students to choose between wearing uniforms or wearing street clothes." - Remind your students to include some uncertainty, which should be proved to the reader. 4. Present the three argumentative appeals: logos, ethos and pathos. - Explain to the class that, to write a persuasive argument, they need to support their general claims using concrete and specific data. - Talk about inductive reasoning, starting with specifics and branching out toward a generalization. - Discuss deductive reasoning, which begins with a general observation and moves toward specifics. - Follow up your instruction by teaching your class to use 2 or 3 different reasons to support their arguments. - Teach your students to make the connection between their ethics (ethos) and their arguments. They need to convince readers they are honest, well-informed and fair, which makes it easier for their readers to trust their values and intentions. - Instruct your class that, to provide a more convincing persuasive essay, they should strengthen their appeals with an emotional appeal (pathos). - Show the class how to use pathos with a narrative description that comes from personal experience. Using personal experience helps the reader understand a point of view from a different perspective. 5. Teach the class about the organization of their persuasive essays. - Explain that persuasive writing can be organized with the introduction, statement of the case, proposition statement, refutation, confirmation, digression and conclusion. - Continue your instruction by teaching the class that the organization of a persuasive essay can be changed as needed for specific assignments or arguments. - Present the different types of persuasive writing, which include brochures, advertising, bumper stickers, editorials, consumer reports, contest entries, debate notes, dialogues, "how to" directions, graffiti, persuasive letters, news stories, orations, proposals, requests, sermons, telephone dialogues and undercover reports.
Things You'll Need - Examples of persuasive writing - Persuasive writing assignments
27
…five simple rules for creating world-changing presentations. The next rule is: Help them see what you are saying. Rule number 4: Practice design, not decoration. The last rule is: Cultivate healthy relationships (with your slides and your audience).
Literate Environment Analysis Presentation. Angela Flores. The Beginning Reader PreK-3, EDUC-6706G-6. April 5, 2012.
Students must be able to navigate a print-rich world in a more analytical manner than ever before. For this reason, we, as educators, have the responsibility to create a space for students to engage in reading, writing, and thinking activities that stimulate self-motivated, lifelong learning. Now I bring you… FOUR STEPS TO CREATING A LITERATE ENVIRONMENT.
Using cognitive and non-cognitive assessment allows us to understand each child as a reader, writer, and thinker.
Step 1: Cognitive Assessments. Use cognitive assessment to gather information about each student’s reading development among the five pillars: phonics, phonemic awareness, fluency, comprehension, and vocabulary. This allows me to “understand the strengths and needs of each student” (Afflerbach, 2007, p. 4) and then plan whole-group, small-group, or individualized instruction to meet those needs. Cognitive assessments I used for my students (Tompkins, 2010): DIBELS Oral Reading Fluency Measures (Good & Kaminski, 2002); spelling inventory; vocabulary inventory (Good & Kaminski, 2002); Diagnostic Decoding Survey (Really Great Reading, 2008); phonological awareness survey (Really Great Reading, 2008).
Step 2: Non-Cognitive Assessments. Use non-cognitive assessment to understand each learner on a personal level: what motivates them to read, what they like to read, what interests they have in life, etc. This information gives you insights into the identity of the reader so that you can begin choosing text and forming instruction that stimulates literate experiences for each student and motivates them to read and write on their own (Laureate Education Inc., 2010a). Non-cognitive assessments I used to get to know my learners: the information I gathered from these assessments helped me know how students and their families perceived themselves as readers and writers, and allowed me to choose motivating and engaging text and instruction. These included a reading interest survey (Afflerbach, 2007), a reading attitude survey (Afflerbach, 2007), and a literacy inventory to understand literacy development in the home.
Selecting Texts. There are many factors to consider when choosing a group of content- and concept-supportive texts that will motivate and engage students in the reading and writing process, and that will enhance language and literacy development for all students. Teachers must take into account the impact that text factors such as genres, text structures, and text features have on the readability of a text and on the students’ comprehension of a text (Tompkins, 2010). Then, teachers must also bear in mind elements such as text length, size of the print, visual support, sentence length, the variety of irregular and regular vocabulary words, and motivational/interest levels for students when choosing the range of books to offer them (Laureate Education, 2010b). Hartman (Laureate Education Inc., 2010b) suggests using a literacy matrix, with linguistic, informational, narrative, and semiotic dimensions, to evaluate and select a range of texts to enhance learning. Selecting a wide range of text according to genre, readability, and interest will enhance learning and ensure that students are motivated and excited to read, write, and think.
My Texts. Tara (1st Grade Emergent Reader) needed narrative and informational text that support listening comprehension, language, and phonological awareness development in visual and auditory ways. These books were either used as read-alouds or as independent reading choices: The Tiny Seed, Carle (1987); A Seed is Sleepy, Hutts Aston (2007); Parts of a Plant, Blevins (2003); One Child, One Seed: A South African Counting Book, Cave (2003); Jack’s Garden, Cole (1997). John (2nd Grade Beginning Reader) needed narrative and informational text that develop listening comprehension and imagery, and may even spark further wonderings. Some books had varied sentence length and vocabulary and support the reader's interests and literacy needs: Fawn in the Grass, Ryder (2001); Winter Whale, Ryder (1994); Jaguar in the Rain Forest, Ryder (1996); Look Who Lives in the Desert! Bouncing and Pouncing, Hiding and Gliding, Sleeping and Creeping, Bessesen (2004).
Online Texts. Plant Explorer (www.naturegrid.org.uk/plang/index.html) includes pages that are interactive, text that is short and contains script that is simple and large, and text features such as headings, pictures, and captions. In a second online text (www.k12science.org/curriculum/bucketproj), students discover and research a pond habitat near them and share their research with other students. The text is informational and is written so that students in any range can interact with it on some level.
The Interactive Perspective of Reading (Laureate Education, Inc., 2010). Teachers work to teach students to be strategic in the way they decode words and choose strategies to comprehend text, and to help them become metacognitive thinkers as they read. We want them to be “self-regulated readers that can navigate through text on their own without prompting from the teacher” (Laureate Education Inc., 2010c). Effective instructional practices for the interactive perspective: read-alouds; modeled think-alouds; whole-group, small-group, and one-on-one instruction. Reading skills and strategies (Tompkins, 2010): comprehension (visualization, inference, cause/effect, sequencing, predicting, main idea); vocabulary; fluency (Reader’s Theatre, repeated reading); phonics (word families, making words); phonemic awareness (blending, segmenting, rhyming).
The Critical Perspective. Purpose: teach students to think critically, view text from multiple perspectives, examine and analyze who wrote the text and why, and judge the validity and believability of text (Laureate Education Inc., 2010d). Strategies: Open Mind Portraits, QAR, Hot Seat, Book Talks (Tompkins, 2010).
The Response Perspective. Purpose: provide students with literacy experiences that affect them on a personal/emotional level (Laureate Education Inc., 2010f). Strategies: response journal, double-entry journal, learning logs (Tompkins, 2010); artistic response, multisensory experiences, dramatic response (Laureate Education Inc., 2010e). We want students to interact with text in a way that changes them and leaves an impression for the rest of their lives (Probst, 1987). Create a safe environment that is conducive to risk taking and responding to text (Laureate Education Inc., 2010f).
Please respond to the following questions on your response card: • What insights did you gain about literacy and literacy instruction from viewing this presentation? • How might the information presented change your literacy practices and/or your literacy interactions with students? • In what ways can I support you in the literacy development of your students or children? How might you support me in my work with students or your children? • What questions do you have?
References • Afflerbach, P. (2007). Understanding and using reading assessment, K–12. Newark, DE: International Reading Association. • Bessesen, B. (2004). Look who lives in the desert! Bouncing and pouncing, hiding and gliding, sleeping and creeping. Arizona: Arizona Highways Books. • Blevins, W. (2003). Parts of a plant. Mankato, MN: Coughlan Publishing. • Carle, E. (1987). The tiny seed. New York, NY: Simon & Schuster Children’s Publishing Division. • Castek, J., Bevans-Mangelson, J., & Goldstone, B. (2006). Reading adventures online: Five ways to introduce the new literacies of the Internet through children’s literature. Reading Teacher, 59(7), 714–728. • Cave, K. (2003). One child, one seed: A South African counting book. New York, NY: Henry Holt & Company. • Cole, H. (1997). Jack’s garden. New York, NY: Greenwillow Books. • Good, R. H., & Kaminski, R. A. (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. • Honig, B. (2008). Assessing reading multiple measures (2nd ed.). Novato, CA: Academic Therapy Publications. • Hutts Aston, D. (2007). A seed is sleepy. San Francisco, CA: Chronicle Books. • Laureate Education, Inc. (Executive Producer). (2010a). Getting to know your students [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (Executive Producer). (2010b). Analyzing and selecting texts [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (Executive Producer). (2010c). Interactive perspectives: Strategic processing [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (Executive Producer). (2010d). Critical perspective [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (Executive Producer). (2010e). Perspective on literacy learning [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (Executive Producer). (2010f). Response perspective [Webcast]. The Beginning Reader, PreK-3. Baltimore: Author. • Laureate Education, Inc. (2010). Frameworks for literacy instruction [Course document]. The Beginning Reader, PreK-3. Baltimore, MD: Author. • Probst, R. E. (1987). Transactional theory in the teaching of literature. Resources in Education, 22(12). • Really Great Reading (2008). Decoding surveys. Retrieved from http://www.reallygreatreading.com • Ryder, J. (1994). Winter whale. New York, NY: Harper Collins. • Ryder, J. (1996). Jaguar in the rain forest. New York, NY: Harper Collins. • Ryder, J. (2001). Fawn in the grass. New York, NY: Harper Collins. • Tompkins, G. E. (2010). Literacy for the 21st century: A balanced approach (5th ed.). Boston: Allyn & Bacon.
27
Successful Spellers - Review This Successful Spellers activity reviews spelling conventions. Your students will read, write and organize a word list using guide words. They then will complete a puzzle using the word list and challenge words. Number of pages: 3 Subject: Phonemic Awareness, Phonics, Spelling, Vocabulary, Writing Grade: 4 Type: Learning Activities and Worksheets File Size: 283 KB
27
Inspire your young engineer! Cut out the pictures of machine parts from this worksheet and let your engineer in training use them to create a paper cut-out simple machine.
27
to imagine you in action. Focus Your Essay Below are some questions to think about as you develop your application essay. Use these and other questions you identify about your own learning and leadership goals to help you develop an integrated statement. Essays that are merely a list of separate answers... Year 8 Parvana Text Response Essay Being able to write a text response essay is a key skill. So what exactly does a text response essay do and why do we write them? Imagine that you have been given the following topic: In the novel Parvana, the characters experience a number of changes. What are... Essay Analysis 1 Running Head: ESSAY ANALYSIS PAPER Essay Analysis on “Shooting an Elephant” Michael J. Charley University of Phoenix Instructor: Dr. Vanessa Holmes Course: Business Literature Essay Analysis 2 George Orwell's essay 'Shooting an Elephant' gives a great... instead these students are going into the work force without any experience, creating an issue for our economy. In 2008, Robert T. Perry wrote an essay which appeared on _InsideHigherEd.com_ and then again in a college book, Practical Argument, about "On 'Real Education.'" Perry proposed that the United... segregation is to deny them their "dignity and worth" as a human being (719-720). Another type of appeal to logic is more implicit. It asks readers to see into the presented facts. In quoting an elderly black woman, "My feets is tired but my soul is at rest" (727), Dr. King makes such an... The Hobbit: JRR Tolkien Essay Topic 3 J.R.R. Tolkien in The Hobbit introduces to the readers Bilbo Baggins and Thorin Oakenshield. As J.R.R. Tolkien originally presents the character Bilbo Baggins, he is introducing a timid, home-loving and unadventurous character. While he introduces the dwarf Thorin... of study and contemplation. During this time he read a lot, wrote a lot, dictated a lot and meditated on and annotated so many books. He was a voracious reader and the work that he wrote during this period was of infinitely greater importance than anything written by him before. He joined the army for some time... this essay? In your response, explore the deeper meaning of this question. The goal is not just to complete the assignment but instead to convey a message. What do you plan to accomplish with this essay? What do you hope the reader takes away from this argument? My purpose is to get the reader to agree... death penalty with an essay, "Death and Justice", published in the New Republic in April 1985. Koch expresses his opinion on the death penalty through the use of modes such as Ethos, Logos, and Pathos in his essay. These modes are used to create a response in the readers that will either support... History shows that many great leaders had one thing in common: charisma. Charismatic leaders attract followers with charm and personality. These leaders have the ability to motivate followers to do almost anything. There are many common characteristics of the charismatic leader; most characteristics involve... Arguable - A thesis statement should not be a statement of fact or an assertion with which every reader is likely to immediately agree. (Otherwise, why try to convince your readers with an argument?) Relevant - If you are responding to an assignment, the thesis should answer the question... Michael Moore’s essay “Idiot Nation” focuses on the steadily declining intelligence of America due to the insufficient education being provided, and the politicians who are more than a little to blame for it.
In Moore’s writing he discusses the leaders of America who set an embarrassing example for our... district leader for the Democratic Party in 1963. Koch also held other offices in politics, being elected to the US House of Representatives and being elected mayor of New York. When Koch ran for mayor, he ran to stop unnecessary spending and to reduce crime, which got him reelected twice. In the essay “Death... apart. Our leaders are corrupt, our environment is being destroyed, and there are thousands of children being born each day. The three major social problems facing American citizens in the 21st century are births to unmarried women, being able to trust our government and our leaders, and lastly destroying... An Essay on Religion and Politics How to write an essay on religion in politics? When writing an essay on religion in politics, one should first have a definition of the two terms. Religion is a set of beliefs and practices, while politics is the process by which a group of people make decision... religious studies? Stephen Prothero’s article convinces its readers otherwise. Prothero effectively expresses the need to spur education in religion with the use of numerous types of rhetorical devices, such as inducing the feeling of discomfort in the readers through examples of religious ignorance in America... has affected many Third World countries including India, South America, and Africa. Two authors, Marie Javdani and Chitra Divakaruni, have written essays concerning the effects of globalization, which include problems with legislation and the effects of America’s actions. According to the Indian government... In his essay, “Women’s Brains”, Stephen Jay Gould discusses the incorrect and often biased research of women’s intelligence based on data written by the craniometrist Paul Broca. While Gould does not come out and blatantly say it, I believe that he is using this essay to appeal to a more open-minded individual... Is there a Chinese Way of War? Many English-language books have appeared during the past 15 years on the subjects of warfare, strategy, and violence in China. Before the 1990s, volumes published on such subjects, while certainly not unknown, were few and far between... Michael Moore’s essay “Idiot Nation” focuses on the steadily declining intelligence of America due to the insufficient education being provided, and the politicians who are more than a little to blame for it. In Moore’s writing he discusses the leaders of America who set an embarrassing example for... In his essay, "Inside the Bunker", John Sack successfully portrays his view of the Holocaust deniers and his opinions on the consequences and causes of important issues such as hate, persecution, and denial. Because Sack is able to minimize the distance between himself and the implied reader, he is able... Narrative Essays: To Tell a Story There are four types of essays: Exposition - gives information about various topics to the reader. Description - describes in detail characteristics and traits. Argument - convinces the reader by demonstrating the truth or falsity of a topic. audience. Each author also developed and used a different array of techniques to influence their readers. Reading two very dissimilar writings from two very dissimilar authors will likely attract different readers from all over the spectrum. These two authors have dissimilar ways to make their argument and... How to write an analytical essay Writing is a powerful force that can manipulate one’s mind and ideas profoundly.
People often underestimate how ascendant and potent writing is because of the dominance of modern technology and innovations nowadays. Throughout the past, writing was one of... wide range of tones, he promotes vivid imagery through his entire essay. Martin Luther King defended what he thought was right and used imagery to show that segregation was unconstitutional. In 1963, police arrested civil rights leader Dr. Martin Luther King and jailed him for leading a boycott against... Gender Differences in Communication Have you ever been under someone and wondered how they became a leader? I have been under people that had no clue on how to be an effective leader, and I do not think they really cared. I do think that males and females have differences in communication. They have... A3. Effective Leader Educators debate extending high school to five years because of increasing demands on students from employers and colleges to participate in extracurricular activities and community service in addition to having high grades. Some educators support extending high school to five years... wrote about St. Thomas Aquinas and he said "that an unjust law is a human law that is not rooted in eternal law and natural law" (pg 159-160). The essay states that any law that uplifts human personality is just and any law that degrades human personality is unjust (pg 159). An example of an unjust law is... How is an essay structured? In order for your essay to be convincing and make sense, it needs to be presented inside a well-structured piece of writing. How do you do this within the framework of an essay's general structure of Introduction, Body, Conclusion? Firstly, you need to be clear about what...
27
Ready to dive into contractions? This fill-in-the-blank worksheet asks your first grader to complete each sentence with the proper contraction. She'll practice reading and spelling contractions, and work on her handwriting, too.
27
Students become word explorers as they work their way up word-building pyramids! Students read clues on each side of the pyramid and then change and rearrange letters to create words until they reach the top—where a final clue helps them uncover a mystery word. This engaging puzzle format gives students independent practice in analyzing sound-symbol relationships, decoding, building spelling skills, and broadening vocabulary to become better readers! Try reversing the activity! To do this, fill in the answers on a puzzle and mask the clues. Copy the puzzle and distribute it to students. Then work with them to come up with their own clues for all of the words!
27
Learn the fundamental characteristics of creating heroes and villains. While these characters are created in Adobe Illustrator, this tutorial isn't focused on click-by-click software instruction. It instead sets out to teach you a fundamental process for conceptualizing and putting these characters together. We dip into character theory a bit and then put it into practice. After reading this tutorial, try making your own heroes and villains, and then put them into battle. In order to create a character we need to grasp character theory and how it applies to the characters we are creating, and then we'll put this theory into practice. First let's look at the theory and get a general feel for the characters we'll be creating. Step 1: Silhouette. Some designers say that if people can tell who a character is just by his silhouette, that character is well designed. I agree with that. That's why I want to start this tutorial with this subject. For me the silhouette of a character is the essence of his personality; people have to feel the vibe of the character, feel whether he is friendly or not. Step 2: Geometric Figures. When designing cartoon characters there are basic figures which help us define certain characteristics and personality. In this case we are working with the two basic archetypes: hero and villain. So we are going to work with two basic shapes to define our characters, which are circles and triangles: - Circle: Using rounded figures gives a feeling of peace and softness to our designs. If we apply rounded shapes we give this soft feeling to our character designs. It helps make our character visually gentle. - Triangle: As the opposite of the circle, the triangle is a very aggressive shape. We can use this shape to give our character that aggressiveness we need in a villain. Step 3: Details. Details help us give our characters personality and history. We can give them some battle scars or certain items like a skull necklace. This way we can sense more about our character at first sight. We cover this more in steps 9 and 11 below. Step 4: Colors. Colors on a character are very important. They also give a lot of meaning to the personality of the archetype we are working on. Normally when working with light colors we give a feeling of peace and tranquility, like for example the colors of the sky or snow. But when working with dark colors we give the opposite feeling, like the colors of an erupting volcano, which give us the vibe of danger or alertness. In character coloring it's the same. If we want to give our character a feeling that he can be trusted, then we need to work with light colors, which give this feeling of trust. On the other hand, with villains we need them to give us the feelings of fear and mistrust, so we would lean toward dark and deep colors. Creating Our Hero and Villain. Let's start with creating the actual characters and I'll elaborate on the conceptual development as we go. Take a look, step by step, at how I solve each archetype. Step 5: Setting the Groundwork. To start we need to know what we are going to do. For this example I'm going to go with some kind of wizard characters. I always like to think that heroes are innocent and noble, and that's why I'm going to give our hero a very young appearance. This way we know that he has a lot to learn from life and for the same reason he is pure, innocent, and noble. Contrary to the hero, the villain is normally older and maybe he had some rough times growing up.
Whatever the reason is, he has more experience, so we'll give him a self-secure, cocky personality. Step 6: Initial Sketches. As you can see, my first sketch is composed entirely of rounded shapes; some of them are circles and some are not, but the corners and other parts of the characters are rounded. As mentioned previously, by using rounded shapes we can create this feeling of softness. Giving this feeling to our character makes him more touching. For the villain we are going to use pointy shapes like triangles; this way we can make him more aggressive. We can feel the sense of danger emanating from him. Step 7: Character Map. Once we are happy with the shapes we used on our sketch, we can start using them as a map so we can see where we are going with this. We need to look at our sketch lines and figures carefully so we can erase the right ones and give some direction to others. It's kind of like reading a map, or at least I see it that way. If you don't want to erase those lines, then you can just make the ones you need more visible. By playing with your lines you'll define the figure of your character. We do the same thing with this guy. The only difference is that thanks to the more rigid and less rounded shapes we can start getting a more aggressive and dangerous general shape for our villain. Step 8: Overall Shape. Now that we have a clean shape, we can see that our character looks kinda soft and friendly; this will help give us an idea of what kind of character we are working with. We'll start giving him some details in the next step. As for the villain, we have a more imposing shape, like an authority figure. He has a rigid posture indicating he's not too friendly. We still need to give him more details to make him more evil, but we have a good start here with the overall shape. Step 9: Adding Initial Details. Now that we have our general shape and the vibe we want to give to our character, we can start giving him the first details, like clothing. In this case I decided to do some kind of magical characters, like wizards. So I'll give him some loose, clean clothing; by doing this we give the sensation of pureness and freedom. He also gets sandals similar to Jesus'; I would like to think fewer possessions represent more humbleness. As I mentioned in earlier steps, the villain is more self-secure and kinda cocky; that's why I'm going to give him a cape, like a count or something. Also I've given him no shirt, which shows his muscles. This way he can intimidate our hero, since our hero has a very normal build. Step 10: Adding Color. As for the colors for our hero, I used the most obvious ones, like the plain white robes, which sing of purity and cleanness. Similar to the hero, I'm going to use obvious colors for the villain. In this case, dark colors, so we can give this tutorial a very clear example of what we are working with here: pitch-black pants, boots and hair, and a dark red cape. Since our hero has very light-colored skin, we want to contrast him, so I use a dark color for the villain's skin. Step 11: Refining Details. Now that we have the colors assigned and most of the character done, we need to pay attention to some details and add a couple more. This will help us give the final touches to our characters. For the hero we need to pay attention to the clothing: his robes are almost brand new, kinda like his inexperience in battle. This could be the case or not, but the main point of this is that he must look friendly and noble. That's why his robes are so well taken care of.
Take note of these details: - The staff is rounded from top to bottom. - We can associate the hero with nature and all living things, especially mages in all those fantastic tales. That's why I add a leaf crown, so it can contrast with some of the details we are going to add to the villain. - I don't see his knife as an offensive weapon. I see it as more of a tool of defense, since it's a very small knife. He could easily be carrying a large sword or something like that. - His expression is very friendly, not deviant-looking like the next example. For the villain: - The cape looks all torn apart, giving signals of battle experience. This also gives him a not-too-friendly appearance. - He has many battle scars. He's been in many battles and he's not afraid of getting hurt. - Contrary to the hero, the villain is associated with terror and death; that's why I added a skull necklace, so he can strike fear in his opponents and give a sense of black magic to his normal activities. - He's carrying a big sword not just to defend himself; he uses it to hurt too. - The staff he's using is similar to the hero's, but this one looks less organic and more stylized. It's a big black spike for intimidation. He can also do damage to someone with it. - His expression is a very spooky grin, not the same as the hero's. We may add that his eyes look a little lost; he has seen a lot of bad things and he has started to like it. I hope this tutorial helps you create your own heroic and villainous characters. These characters are made from very basic elements. Some professional and well-known characters are extremely complex, but I can assure you that in essence they have very similar characteristics to the ones we used to put together these two simple characters. I'm going to leave you with the two finished characters side by side for comparison.
27
Take the agony out of practicing math! This charming worksheet asks your child to rewrite and solve eight double-digit addition problems that involve carrying remainders. However, as an ending reward, your little one can complete a fun coloring activity! Who knew there was an entertaining way to boost his fine motor skills and addition ability? Need more practice with addition with carrying? Print out the full set of Color Me worksheets!
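As a quick illustration of the carrying (regrouping) these problems practice, here is one worked example in the same double-digit format; the numbers are chosen for illustration and are not taken from the worksheet itself.
\[ \begin{array}{r} 47 \\ +\,38 \\ \hline 85 \end{array} \qquad \text{ones: } 7 + 8 = 15 \ (\text{write } 5,\ \text{carry } 1); \qquad \text{tens: } 4 + 3 + 1 = 8. \]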
27
This Algebra 1 - Exponents Worksheet produces problems for working with different operations with exponents. You may select from exponents with multiplication or division and products or quotients to a power. Exponents with Multiplication Exponents with Division Products to a Power Quotients to a Power Both Positive and Negative Exponents Only Positive Exponents You may enter a message or special instruction that will appear on the bottom left corner of the Exponents Worksheet. Include Exponents Worksheet Answer Page Now you are ready to create your Exponents Worksheet by pressing the Create Button.
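For reference, the option names above (exponents with multiplication or division, products to a power, quotients to a power, and negative exponents) correspond to the standard exponent identities summarized below; this is ordinary algebra and is not text taken from the worksheet generator itself.
\[ x^a \cdot x^b = x^{a+b}, \qquad \frac{x^a}{x^b} = x^{a-b}, \qquad (xy)^n = x^n y^n, \qquad \left(\frac{x}{y}\right)^n = \frac{x^n}{y^n}, \qquad x^{-n} = \frac{1}{x^n} \qquad (x \neq 0,\ y \neq 0 \text{ where they appear in a denominator}). \]
A generated problem simply asks students to apply one of these identities, for example rewriting \( (2x^2 y)^3 \) as \( 8x^6 y^3 \).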
27
Let's make some nonsense! Using "Jabberwocky" by Lewis Carroll, a poem famous for its otherworldly wacky words, your child's imagination will run free as he makes his own nonsense.
27
Photonic Material May Facilitate All-Optical Switching And Computing. A class of molecules whose size, structure and chemical composition have been optimized for photonic use could provide the demanding combination of properties needed to serve as the foundation for low-power, high-speed all-optical signal processing. All-optical switching could allow dramatic speed increases in telecommunications by eliminating the need to convert photonic signals to electronic signals, and back, for switching. All-optical processing could also facilitate photonic computers with similar speed advances. Details of these materials, and the design approach behind them, were reported February 18th in Science Express, the rapid online publication of the journal Science. Conducted at the Georgia Institute of Technology, the research was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR). “This work provides proof that at least from a molecular point of view, we can identify and produce materials that have the right properties for all-optical processing,” said Seth Marder, a professor in the Georgia Tech School of Chemistry and Biochemistry and co-author of the paper. “This opens the door for looking at this issue in an entirely different way.” The polymethine organic dye materials developed by the Georgia Tech team combine large nonlinear properties, low nonlinear optical losses, and low linear losses. Materials with these properties are essential if optical engineers are to develop a new generation of devices for low-power and high-contrast optical switching of signals at telecommunications wavelengths. Keeping data all-optical would greatly facilitate the rapid transmission of detailed medical images, development of new telepresence applications, high-speed image recognition, and even the fast download of high-definition movies. But the favorable optical properties of these new materials developed at Georgia Tech have so far been demonstrated only in solution. For their materials to have practical value, the researchers will have to incorporate them in a solid phase for use in optical waveguides, and address a long list of other challenges. “We have developed high-performing materials by starting with optimized molecules and getting the molecular properties right,” said co-author Joseph Perry, also a professor in the Georgia Tech School of Chemistry and Biochemistry. “Now we have to figure out how to pack them together so they have a high density and useful physical forms that would be stable under operation.” Marder, Perry and collaborators in Georgia Tech’s Center for Organic Photonics and Electronics (COPE) have been working on the molecules for several years, refining their properties and adding atoms to maximize their length without inducing symmetry breaking, a phenomenon in which unequal charges build up within molecules. This molecular design effort, which builds on earlier research with smaller molecules, included both experimental work and theoretical studies done in collaboration with Jean-Luc Bredas, also a professor in the School of Chemistry and Biochemistry.
The design strategies identified by the research team, which also included Joel Hales, Jonathan Matichak, Stephen Barlow, Shino Ohira, and Kada Yesudas, could be applied to the development of even more active molecules, though Marder believes the existing materials could be modified to meet the needs of all-optical processing. “For this class of molecules, we can with a high degree of reliability predict where the molecules will have both large optical nonlinearities and low two-photon absorption,” said Marder. “Not only can we predict that, but using well-established chemical principles, we can tune where that will occur such that if people want to work at telecommunications wavelengths, we can move to where the molecules absorb to optimize its properties.” Switching of optical signals carried in telecommunications networks currently requires conversion to electrical signals, which must be switched and then converted back to optical format. Existing electro-optical technology may ultimately be able to provide transmission speeds of up to 100 gigabits per second. However, all-optical processing could theoretically transmit data at speeds as high as 2,000 gigabits per second, allowing download of high-definition movies in minutes rather than hours. “Even if the frequency of signals coming and going is high, there is a latency that causes a bottleneck for the signals until the modulation and switching are done,” Perry explained. “If we can do that all optically, then that delay can be reduced. We need to get electronics out of the system.” Perry and Marder emphasize that many years of research remain ahead before their new materials will be practical. But they believe the approach they’ve developed charts a path toward all-optical systems. “While we have not made all-optical switches, what we have done is provide a fundamental understanding of what the systems are that could have the combined set of properties that would make this possible,” Marder said. “Conceptually, we have probably made it over the hump with this class of molecules. The next part of this work will be difficult, but it will not require a fundamental new understanding of the molecular structure.” This article is based on work supported in part by the STC program of the National Science Foundation under agreement DMR-0120967, the DARPA MORPH Program and ONR (N00014-04-0095 and N00014-06-1-0897) and the DARPA ZOE Program (W31P4Q-09-1-0012). The comments and opinions expressed are those of the researchers and do not necessarily represent the views of the NSF, DARPA or ONR. Writer: John Toon, Georgia Institute of Technology. Image 1: Georgia Tech professor Seth Marder, center, is part of the team that developed a new photonic material that could facilitate all-optical signal processing. Credit: Rob Felt. Image 2: Georgia Tech professor Seth Marder is part of the team that developed a new photonic material that could facilitate all-optical signal processing. Credit: Rob Felt. Image 3: Georgia Tech professor Joseph Perry, left, is part of the team that developed a new photonic material that could facilitate all-optical signal processing. Credit: Georgia Tech. On the Net:
28
A topological insulator is a material with time reversal symmetry and non-trivial topological order that behaves as an insulator in its interior but whose surface contains conducting states, meaning that electrons can only move along the surface of the material. Although ordinary band insulators can also support conductive surface states, the surface states of topological insulators are special, since they are symmetry protected by particle number conservation and time reversal symmetry. In the bulk of a non-interacting topological insulator, the electronic band structure resembles an ordinary band insulator, with the Fermi level falling between the conduction and valence bands. On the surface of a topological insulator there are special states that fall within the bulk energy gap and allow surface metallic conduction. Carriers in these surface states have their spin locked at a right angle to their momentum (spin-momentum locking). At a given energy the only other available electronic states have different spin, so "U-turn" scattering is strongly suppressed and conduction on the surface is highly metallic. Non-interacting topological insulators are characterized by an index (known as the Z2 topological invariant) similar to the genus in topology. The "protected" conducting states on the surface are required by time-reversal symmetry and the band structure of the material. The states cannot be removed by surface passivation if it does not break the time-reversal symmetry.

Prediction and discovery

Time-reversal symmetry protected edge states were predicted in 1987 to occur in quantum wells (very thin layers) of mercury telluride sandwiched between cadmium telluride, and were observed in 2007. In 2007, they were predicted to occur in three-dimensional bulk solids of binary compounds involving bismuth. A 3D "strong topological insulator" exists which cannot be reduced to multiple copies of the quantum spin Hall state. The first experimentally realized 3D topological insulator state (symmetry protected surface states) was discovered in bismuth antimonide. Shortly thereafter symmetry protected surface states were also observed in pure antimony, bismuth selenide, bismuth telluride and antimony telluride using ARPES. Many semiconductors within the large family of Heusler materials are now believed to exhibit topological surface states. In some of these materials the Fermi level actually falls in either the conduction or valence bands due to naturally occurring defects, and must be pushed into the bulk gap by doping or gating. The surface states of a 3D topological insulator are a new type of 2DEG (two-dimensional electron gas) in which the electron's spin is locked to its linear momentum. In 2012 several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator, in accordance with earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, the existence of a topological surface state in this material would lead to a topological insulator with strong electronic correlations.

Properties and applications

The spin-momentum locking in the topological insulator allows symmetry protected surface states to host Majorana particles if superconductivity is induced on the surface of 3D topological insulators via proximity effects. (Note that Majorana zero-modes can also appear without 3D topological insulators.)
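The spin-momentum locking described above is often summarized by an effective surface Hamiltonian; the following is the standard textbook form for a single Dirac cone, added here for illustration rather than taken from the text:

    H_{\mathrm{surface}} = \hbar v_F \, (\boldsymbol{\sigma} \times \mathbf{k}) \cdot \hat{z} = \hbar v_F \, (\sigma_x k_y - \sigma_y k_x)

where v_F is the Fermi velocity, k the in-plane momentum and \sigma the electron spin. For every momentum the spin eigenstate lies in the surface plane, perpendicular to k, so reversing the direction of motion (k to -k) requires flipping the spin. As long as time-reversal symmetry is intact, non-magnetic impurities cannot do this, which is the microscopic reason the "U-turn" backscattering mentioned above is suppressed.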
The non-trivialness of topological insulators is encoded in the existence of a gas of helical Dirac fermions. Helical Dirac fermions, which behave like massless relativistic particles, have been observed in 3D topological insulators. Note that the gapless surface states of topological insulator differ from those in the Quantum Hall effect: the gapless surface states of topological insulator are symmetry protected (i.e. not topological), while the gapless surface states in Quantum Hall effect are topological (i.e. robust against any local perturbations that can break all the symmetries). The Z2 topological invariants cannot be measured using traditional transport methods, such as spin Hall conductance, and the transport is not quantized by the Z2 invariants. An experimental method to measure Z2 topological invariants was demonstrated which provide a measure of the Z2 topological order. (Note that the term Z2 topological order has also been used to describe the topological order with emergent Z2 gauge theory discovered in 1991.) - Kane, C. L.; Mele, E. J. (30 September 2005). "Z2 Topological Order and the Quantum Spin Hall Effect". Physical Review Letters 95 (14): 146802. arXiv:cond-mat/0506581. Bibcode:2005PhRvL..95n6802K. doi:10.1103/PhysRevLett.95.146802. - Zheng-Cheng Gu and Xiao-Gang Wen Tensor-Entanglement-Filtering Renormalization Approach and Symmetry Protected Topological Order Phys. Rev. B80, 155131 (2009). - Pollmann, F.; Berg, E.; Turner, Ari M.; Oshikawa, Masaki (2012). "Symmetry protection of topological phases in one-dimensional quantum spin systems". Phys. Rev. B 85 (7): 075125. arXiv:0909.4059. Bibcode:2012PhRvB..85g5125P. doi:10.1103/PhysRevB.85.075125. - Xie Chen, Zheng-Cheng Gu, Xiao-Gang Wen, Classification of Gapped Symmetric Phases in 1D Spin Systems Phys. Rev. B 83, 035107 (2011); Xie Chen, Zheng-Xin Liu, Xiao-Gang Wen, 2D symmetry protected topological orders and their protected gapless edge excitations Phys. Rev. B 84, 235141 (2011) - Pankratov, O.A.; Pakhomov, S.V.; Volkov, B.A. (January 1987). "Supersymmetry in heterojunctions: Band-inverting contact on the basis of Pb1-xSnxTe and Hg1-xCdxTe". Solid State Communications 61 (2): 93–96. doi:10.1016/0038-1098(87)90934-3. - König, Markus; Wiedmann, Steffen; Brüne, Christoph; Roth, Andreas; Buhmann, Hartmut; Molenkamp, Laurens W.; Qi, Xiao-Liang; Zhang, Shou-Cheng (2007-11-02). "Quantum Spin Hall Insulator State in HgTe Quantum Wells". Science 318 (5851): 766–770. arXiv:0710.0582. Bibcode:2007Sci...318..766K. doi:10.1126/science.1148047. PMID 17885096. Retrieved 2010-03-25. - Fu, Liang; C. L. Kane (2007-07-02). "Topological insulators with inversion symmetry". Physical Review B 76 (4): 045302. arXiv:cond-mat/0611341. Bibcode:2007PhRvB..76d5302F. doi:10.1103/PhysRevB.76.045302. Retrieved 2010-03-26. Shuichi Murakami (2007). "Phase transition between the quantum spin Hall and insulator phases in 3D: emergence of a topological gapless phase". New Journal of Physics 9 (9): 356–356. arXiv:0710.0930. Bibcode:2007NJPh....9..356M. doi:10.1088/1367-2630/9/9/356. ISSN 1367-2630. Retrieved 2010-03-26. - Kane, C. L.; Moore, J. E. (2011). "Topological Insulators". Physics World 24: 32. - Hsieh, D.; D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava & M. Z. Hasan (2008). "A Topological Dirac insulator in a quantum spin Hall phase". Nature 452 (9): 970–974. arXiv:0902.1356. Bibcode:2008Natur.452..970H. doi:10.1038/nature06843. PMID 18432240. Retrieved 2010. - Hasan, M.Z.; Kane, C.L. (2010). "Topological Insulators". 
Review of Modern Physics 82 (4): 3045. arXiv:1002.3895. Bibcode:2010RvMP...82.3045H. doi:10.1103/RevModPhys.82.3045. Retrieved 2010-03-25. - Chadov, Stanislav; Xiao-Liang Qi, Jürgen Kübler, Gerhard H. Fecher, Claudia Felser, Shou-Cheng Zhang (July 2010). "Tunable multifunctional topological insulators in ternary Heusler compounds". Nature Materials 9 (7): 541–545. arXiv:1003.0193. Bibcode:2010NatMa...9..541C. doi:10.1038/nmat2770. Retrieved 2010-08-05. - Lin, Hsin; L. Andrew Wray; Yuqi Xia; Suyang Xu; Shuang Jia; Robert J. Cava; Arun Bansil; M. Zahid Hasan (July 2010). "Half-Heusler ternary compounds as new multifunctional experimental platforms for topological quantum phenomena". Nat Mater 9 (7): 546–549. arXiv:1003.0155. Bibcode:2010NatMa...9..546L. doi:10.1038/nmat2771. ISSN 1476-1122. PMID 20512153. Retrieved 2010-08-05. - Hsieh, D.; Y. Xia, D. Qian, L. Wray, F. Meier, J. H. Dil, J. Osterwalder, L. Patthey, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, M. Z. Hasan (2009). "Observation of Time-Reversal-Protected Single-Dirac-Cone Topological-Insulator States in Bi2Te3 and Sb2Te3". Physical Review Letters 103 (14): 146401. Bibcode:2009PhRvL.103n6401H. doi:10.1103/PhysRevLett.103.146401. PMID 19905585. Retrieved 2010-03-25. - Noh, H.-J.; H. Koh, S.-J. Oh, J.-H. Park, H.-D. Kim, J. D. Rameau, T. Valla, T. E. Kidd, P. D. Johnson, Y. Hu and Q. Li (2008). "Spin-orbit interaction effect in the electronic structure of Bi2Te3 observed by angle-resolved photoemission spectroscopy". EPL Europhysics Letters 81 (5): 57006. arXiv:0803.0052. Bibcode:2008EL.....8157006N. doi:10.1209/0295-5075/81/57006. Retrieved 2010-04-25. - Hsieh, D.; Xia, Y.; Qian, D.; Wray, L.; Dil, J. H.; Meier, F.; Osterwalder, J.; Patthey, L.; Checkelsky, J. G.; Ong, N. P.; Fedorov, A. V.; Lin, H.; Bansil, A.; Grauer, D.; Hor, Y. S.; Cava, R. J.; Hasan, M. Z. (2009). "A tunable topological insulator in the spin helical Dirac transport regime". Nature 460 (7259): 1101–1105. arXiv:1001.1590. Bibcode:2009Natur.460.1101H. doi:10.1038/nature08234. PMID 19620959. - Eugenie Samuel Reich. "Hopes surface for exotic insulator". Nature. - Dzero, V.; K. Sun; V. Galitski; P. Coleman (2009). "Topological Kondo Insulators". Physical Review Letters 104 (10): 106408. arXiv:0912.3750. Bibcode:2010PhRvL.104j6408D. doi:10.1103/PhysRevLett.104.106408. Retrieved 2013-01-06. - "Weird materials could make faster computers". Science News. doi:10.1038/nature13534. Retrieved 2014-07-23. - Fu, L.; C. L. Kane (2008). "Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator". Phys. Rev. Lett. 100: 096407. arXiv:0707.1692. Bibcode:2008PhRvL.100i6407F. doi:10.1103/PhysRevLett.100.096407. Retrieved 2010. - Topological Superconductivity and Majorana Fermions in Metallic Surface-States Andrew C. Potter, Patrick A. Lee, Phys. Rev. B 85, 094516 (2012) arXiv:1201.2176 - Hsieh, D.; D. Hsieh, Y. Xia, L. Wray, D. Qian, A. Pal, J. H. Dil, F. Meier, J. Osterwalder, C. L. Kane, G. Bihlmayer, Y. S. Hor, R. J. Cava and M. Z. Hasan (2009). "Observation of Unconventional Quantum Spin Textures in Topological Insulators". Science 323 (5916): 919–922. Bibcode:2009Sci...323..919H. doi:10.1126/science.1167733. Retrieved 2010. - N. Read and Subir Sachdev, Large-N expansion for frustrated quantum antiferromagnets, Phys. Rev. Lett. 66 1773 (1991) - Xiao-Gang Wen, Mean Field Theory of Spin Liquid States with Finite Energy Gaps, Phys. Rev. B 44 2664 (1991). - Hasan, M. Z.; Kane, C. L. (2010). "Topological Insulators". 
Reviews of Modern Physics 82 (4): 3045. arXiv:1002.3895. Bibcode:2010RvMP...82.3045H. doi:10.1103/RevModPhys.82.3045. - Kane, C. L. (2008). "Topological Insulator: An Insulator with a Twist". Nature 4 (5): 348. Bibcode:2008NatPh...4..348K. doi:10.1038/nphys955. - Witze, A. (2010). "Topological Insulators: Physics On the Edge". Science News. - Brumfiel, G. (2010). "Topological insulators: Star material : Nature News". Nature 466 (7304): 310–311. doi:10.1038/466310a. PMID 20631773. - Murakami, Shuichi (2010). "Focus on Topological Insulators". New Journal of Physics. - What’s a Topological Insulator? - "Topological Insulators," by Joel E. Moore, IEEE Spectrum, July 2011
28
Whenever the prefix nano- appears, referring to any manipulation of matter at near-molecular levels, controversy follows. Opponents of such techniques hold in particular that they shouldn’t be used in foodstuffs until we know much more about their effects on human bodies. Nanofood refers to the employment of nanotechnological techniques in any part of the food chain — cultivation, production, processing or packaging — not just in food itself. Big companies are researching the possibilities, some of which sound like science-fiction — smart dust that’s inserted into plants and animals so that farmers can monitor their health in real time; packaging that includes smart sensors that can sniff out gases given off by deteriorating food or alternatively tell you when it is ripe; a drink whose flavour can be changed just by microwaving it; and ways to stabilise nutrients in food, such as omega-3 fats, iron or vitamins, which degrade quickly in storage, by enclosing them in separate tiny containers. The only foodstuffs currently available that have been modified through nanotechnology are a few nutritional supplements, but this is expected to change within a year or two. The word first came to public attention as the title of a report in 2004 by a German firm, the Helmut Kaiser Consultancy. It is in the news because of another report, published by Friends of the Earth in March 2008, which takes an extremely sceptical view of the technology and the likelihood of it being accepted by consumers.

Food packaging using nanotechnology is more advanced than nanofoods, with products on the market that incorporate nanomaterials that scavenge oxygen, fight bacteria, keep in moisture or sense the state of the food. (Sydney Morning Herald, 27 Mar. 2008)

But while the food industry is hooked on nanotech’s promises, it is also very nervous. For if British consumers are sceptical about GM foods, then they are certainly not ready for nanofood. (Daily Mail, 20 Jan. 2007)
28
Hoping to find new ways of addressing environmental pollution, a physicist at the University of Wisconsin-Milwaukee (UWM) has developed some novel ways to observe what happens inside a cell when it comes in contact with contaminants, or when toxic substances touch soil and water. An object's molecules and electrons are always in motion, vibrating and wiggling. Carol Hirschmugl, an associate professor of physics, tracks what happens to molecules when they meet the surface of a particular material or move around in a living cell by taking advantage of these vibrations and using them to map the movement of chemicals within the molecules. Before she can witness any action, though, she has to detect all the parts involved. Using a device called a synchrotron, Hirschmugl can probe what she could not with a normal microscope. The synchrotron emits energy at all spectral frequencies, from infrared (IR) to X-rays. The IR light, which is what Hirschmugl uses, is intense but not visible to the human eye. IR reveals the vibrations of molecules in a cell, which act as "signatures," allowing Hirschmugl to identify the material she's working with. She is using the technique to observe how algae digest carbon dioxide (and give off oxygen), something that has implications for controlling air pollution. In her work with algae, she studies the distribution of proteins, lipids and carbohydrates, molecules that play a major role in metabolizing the organism's food (photosynthesis). This is important for fully understanding a process that is so vitally linked to human respiration and environmental health. "Since the alga uses up a lot of CO2," she says, "what we're interested in is what happens when you change its environmental conditions. We want to look at how its biological makeup changes when exposed to, say, runoff pollution." Recently funded by the National Science Foundation, Hirschmugl will be developing new ways to "see" how the alga reacts to its environment. "Then, I'm taking the question one step further and seeing how the distribution of its parts changes because of interactions with nitrates or ammonium, which come from fertilizer runoff or sewage." Her ultimate goal is to see the internal changes actually take place in a living sample.

Electrons behaving madly

In a second imaging project, Hirschmugl observes the arrangement of specific molecules on a solid surface, this time enlisting the wave properties of electrons. "What we are looking at is way smaller than the wavelength of light," says UWM physicist Dilano Saldin, who collaborates with Hirschmugl. "It can't be seen with the eye. So we need to study the energy distribution from electrons scattered from the surface." The technique Hirschmugl uses is a modified method of low-energy electron diffraction (LEED). By shooting a minute beam of electrons onto a surface and using a sensitive detection plate, she creates a visual picture of the electrons as they are spread out in all directions and eventually hit the plate. After sophisticated analysis, the resulting pattern can reveal the structure of the surface material. Why go to all this trouble? To reveal the workings of the atomic world, says Saldin, whose expertise includes the interpretation of the patterns made by the scattered electrons. Since something as tiny as a molecule cannot be seen, it is difficult to observe its behavior under various conditions. And changes are happening.
At the atomic level, the interplay of materials at the surface can cause unusual molecular rearrangements that alter the way the materials behave, and most interactions of a solid with its environment take place at the surface. This kind of transformation is behind the process of corrosion in metals, for example. The aim of Hirschmugl's surface studies is to examine the behavior of water molecules when they come in contact with an oxide surface, such as soil. The dynamics of this are not well understood, but a better understanding could be valuable in determining how contaminants flow through soil. Driving Hirschmugl's inquiry is the fact that water and soil interact in unpredictable ways, depending on which atoms in the water are touching the oxide surface. "Water and soil present a really different interface," she says. "I want to know what happens next. Do the water molecules break down or do they remain intact?" "With these techniques, we're getting access to the dynamics of the molecules and the statics (location) at the same time." Source: University of Wisconsin - Milwaukee
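A rough numerical aside, added here for orientation and not taken from the article: low-energy electrons are useful surface probes because their de Broglie wavelength is comparable to interatomic spacings,

    \lambda = \frac{h}{\sqrt{2 m_e E}} \approx \sqrt{\frac{150.4}{E\ [\mathrm{eV}]}}\ \text{angstroms}

so an electron with a kinetic energy of about 100 eV has a wavelength of roughly 1.2 angstroms, on the order of the atomic spacings in a crystal surface, which is what makes the LEED patterns described above sensitive to the arrangement of surface atoms.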
28
Quantum computers are computers that exploit the weird properties of matter at extremely small scales. Many experts believe that a full-blown quantum computer could perform calculations that would be hopelessly time consuming on classical computers, but so far, quantum computers have proven devilishly hard to build. The few simple prototypes developed in the lab perform such rudimentary calculations that it’s sometimes difficult to tell whether they’re really harnessing quantum effects at all. At the Association for Computing Machinery’s 43rd Symposium on Theory of Computing in June, associate professor of computer science Scott Aaronson and his graduate student Alex Arkhipov will present a paper describing an experiment that, if it worked, would offer strong evidence that quantum computers can do things that classical computers can’t. Although building the experimental apparatus would be difficult, it shouldn’t be as difficult as building a fully functional quantum computer. If the experiment works, “it has the potential to take us past what I would like to call the 'quantum singularity,' where we do the first thing quantumly that we can’t do on a classical computer,” says Terry Rudolph, an advanced research fellow with Imperial College London’s Quantum Optics and Laser Science, who was not involved in the research. Aaronson and Arkhipov's proposal is a variation on an experiment conducted by physicists at the University of Rochester in 1987, which relied on a device called a beam splitter, which takes an incoming beam of light and splits it into two beams traveling in different directions. The Rochester researchers demonstrated that if two identical light particles — photons — reach the beam splitter at exactly the same time, they will both go either right or left; they won’t take different paths. It’s another of the weird quantum behaviors of fundamental particles that defy our physical intuitions. The MIT researchers' experiment would use a larger number of photons, which would pass through a network of beam splitters and eventually strike photon detectors. The number of detectors would be somewhere in the vicinity of the square of the number of photons — about 36 detectors for six photons, 100 detectors for 10 photons. For any run of the MIT experiment, it would be impossible to predict how many photons would strike any given detector. But over successive runs, statistical patterns would begin to build up. In the six-photon version of the experiment, for instance, it could turn out that there’s an 8 percent chance that photons will strike detectors 1, 3, 5, 7, 9 and 11, a 4 percent chance that they’ll strike detectors 2, 4, 6, 8, 10 and 12, and so on, for any conceivable combination of detectors. Calculating that distribution — the likelihood of photons striking a given combination of detectors — is an incredibly hard problem. The researchers’ experiment doesn’t solve it outright, but every successful execution of the experiment does take a sample from the solution set. One of the key findings in Aaronson and Arkhipov’s paper is that, not only is calculating the distribution an intractably hard problem, but so is simulating the sampling of it. For an experiment with more than, say, 100 photons, it would probably be beyond the computational capacity of all the computers in the world. The question, then, is whether the experiment can be successfully executed. 
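To get a feel for why tabulating the full distribution quickly becomes hopeless, the short Python sketch below simply counts how many distinct detector patterns are possible, assuming each photon lands on a different detector and using the article's rule of thumb that the number of detectors is roughly the square of the number of photons. It is an illustration only, not code from the MIT work.

    from math import comb

    def num_patterns(n_photons: int, n_detectors: int) -> int:
        # Number of ways to choose which detectors fire, assuming at most
        # one photon arrives at any given detector.
        return comb(n_detectors, n_photons)

    for n in (6, 10, 20, 30):
        m = n * n  # detectors ~ square of the photon number, as in the article
        print(f"{n:2d} photons, {m:4d} detectors: {num_patterns(n, m):,} patterns")

Already for six photons and 36 detectors there are 1,947,792 possible patterns, and the count grows combinatorially from there, which is why each run of the experiment can only draw a sample from the distribution rather than map it out.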
The Rochester researchers performed it with two photons, but getting multiple photons to arrive at a whole sequence of beam splitters at exactly the right time is more complicated. “It’s challenging, technologically, but not forbiddingly so,” says Barry Sanders, director of the University of Calgary’s Institute for Quantum Information Science. Sanders points out that in 1987, when the Rochester researchers performed their initial experiment, they were using lasers mounted on lab tables and getting photons to arrive at the beam splitter simultaneously by sending them down fiber-optic cables of different lengths. But recent years have seen the advent of optical chips, in which all the optical components are etched into a silicon substrate, which makes it much easier to control the photons’ trajectories. The biggest problem, Sanders believes, is generating individual photons at predictable enough intervals to synchronize their arrival at the beam splitters. “People have been working on it for a decade, making great things,” Sanders says. “But getting a train of single photons is still a challenge.” Rudolph agrees. “At the moment, the hard thing is getting enough single photons into the chip,” he says. But, he adds, “my hope is that within a few years, we’ll manage to build the experiment that crosses the boundary of what we can practically do with classical computers.” Sanders points out that even if the problem of getting single photons onto the chip is solved, photon detectors still have inefficiencies that could make their measurements inexact: in engineering parlance, there would be noise in the system. But Aaronson says that he and Arkhipov explicitly consider the question of whether simulating even a noisy version of their optical experiment would be an intractably hard problem for a conventional computer. Although they were unable to prove that it was, Aaronson says that “most of our paper is devoted to giving evidence that the answer to that is yes.” He’s hopeful that a proof is forthcoming, whether from his research group or others’.
28
Rice University scientists have done it. After BMW announced the possibility of producing a car that would use nanotechnology for practically all functions, Rice University scientists developed the world's first single-molecule car, a car that was driven on a microscopic gold highway. It is a small coupe devoid of any plush seating or conventional steering system, but it is a real solution for gridlocked cities: with a wheelbase of less than 5 nm, parking it is a cakewalk. Image credit: Y. Shira/Rice University. According to Professor Tour, this development is a watershed insofar as successfully constructing a nanocar represents the first step toward molecular manufacturing. Professor Tour avers: “It’s the beginning of learning how to manipulate things at the nanolevel in nonbiological systems.” The nanocar consists of a chassis and axles made of well-defined organic groups with pivoting suspension and freely rotating axles. The wheels are buckyballs, spheres of pure carbon containing 60 atoms apiece. The entire car measures just 3-4 nanometers across, making it slightly wider than a strand of DNA. A human hair, by comparison, is about 80,000 nanometers in diameter. When the project was initiated, the team was able to assemble the chassis and axles in just six months, but attaching the fullerene wheels proved a far more difficult task, primarily because, according to the scientists, the material to be used (fullerenes) shuts down reactions mediated by transition-metal catalysts. Ultimately the team decided to synthesize the axle and chassis via palladium-catalyzed coupling reactions. Attaching the wheels had to be the last step of the synthesis, but getting four fullerenes onto the molecule in sufficiently high yield was not a trivial task. They found the nanocar was quite stable on the surface, remaining parked until the surface was heated above 170 °C, presumably because of strong adhesion between the fullerene wheels and the underlying gold. A flat gold surface was used to show that the nanocar actually rolls around on its fullerene wheels rather than slipping like a car on ice. Between 170 °C and 225 °C, the researchers observed that the nanocar moved around by translational motion and pivoting. The translational motion was always in a direction perpendicular to the nanocar's axle, indicating that it moves by rolling rather than sliding. The movement of an individual car can be controlled by placing an STM tip in front of it and pulling it forward. The group subsequently built a nanotruck that can transport molecular cargo, as well as a light-driven motorized nanocar. The development opens new vistas. Dr. Bikram Lamba, an international management consultant, is Chairman & Managing Director of Tormacon Limited, a multi-disciplinary consultancy organization. He can be contacted at 905 848 4205. Email: [email protected], site: www.torconsult.com. By Bikram Lamba, Copyright 2005 PhysOrg.com. Source: Rice University
28
Michito Yoshizawa, Zhiou Li, and collaborators at Tokyo Institute of Technology synthesized ~1 nanometer-sized molecular capsules with an isolated cavity using green and inexpensive zinc and copper ions. In sharp contrast to previous molecular capsules and cages composed of precious metal ions such as palladium and platinum, these nanocapsules emit blue fluorescence with 80% efficiency. Molecular nanocapsules have potential applications as photo-functional compounds and materials, but the molecular capsules synthesized so far by incorporating palladium and similar ions exhibit poor fluorescence. The Tokyo Tech researchers expect to be able to prepare multicolor fluorescence composites by the simple insertion of appropriate fluorescent molecules into the isolated cavity of the nanocapsules. Fluorescence has widespread applications, helping researchers to understand issues in the fundamental sciences and develop practical materials and devices. Among the useful fluorescent compounds in development, capsule-shaped molecular architectures, which possess both strong fluorescent properties and a nanometer-sized cavity, are particularly promising. Molecular cages and capsules can be prepared through a simple synthetic process called coordinative self-assembly. However, most of them are composed of precious metal ions such as palladium and platinum, and are non-emissive due to quenching by the heavy metals. Now, Michito Yoshizawa, Zhiou Li, and co-workers from the Chemical Resources Laboratory at Tokyo Institute of Technology report novel molecular nanocapsules with the M2L4 composition (where M represents zinc, copper, platinum, palladium, nickel, cobalt, or manganese). Their zinc and copper capsules, in particular, display unique fluorescent properties. The M2L4 capsules self-assemble from two metal ions and four bent ligands that include anthracene fluorophores (fluorescent parts). X-ray crystallographic analysis verified the closed shell structures, in which the large interior cavities of the capsules, around one nanometer in diameter, are shielded by eight anthracene panels. The zinc capsule emitted strong blue fluorescence with a high quantum yield (80%), in sharp contrast to the weakly emissive nickel and manganese capsules and the non-emissive palladium, platinum, and cobalt capsules. The fluorescence of the copper capsule, on the other hand, depends on the solvent; for example, it shows blue emission in dimethyl sulfoxide but no emission in acetonitrile. This study is the first to show such emissive properties of molecular capsules bearing an isolated large cavity. The researchers believe their nanocapsules could have novel applications in devices such as chemosensors, biological probes, and light-emitting diodes. More information: Zhiou Li, et al., Isostructural M2L4 Molecular Capsules with Anthracene Shells: Synthesis, Crystal Structures, and Fluorescent Properties, Chemistry - A European Journal, 18, 8358 (2012). DOI: 10.1002/chem.201200155
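For context, the 80% figure quoted above is a fluorescence quantum yield, a standard quantity defined (general textbook material, not spelled out in the article) as

    \Phi_F = \frac{\text{number of photons emitted}}{\text{number of photons absorbed}}

so roughly four out of every five photons absorbed by the zinc capsule are re-emitted as blue fluorescence.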
28
A group of physicists from the United States announces the development of a new method of producing hydrogen fuel cells. The approach does not require the use of scarce and expensive platinum, and this could significantly contribute to boosting this field of research. Fuel cells have been touted as the next big thing in technology for several decades, yet progress in this area has been slow. One of the main reasons for that is the fact that some of the elements needed to create the energy sources are very expensive. This automatically raises the prices of the fuel cells themselves, making them inaccessible to the general public, and in no way an alternative to fossil fuels. However, employing fuel cells would reduce pollution considerably, as they only generate water as a byproduct. In their new study, experts with the US Department of Energy (DOE) Los Alamos National Laboratory (LANL) managed to identify a platinum-free catalyst that can be used for the same purposes as its platinum-based counterpart. The work was led by LANL experts Gang Wu, Christina Johnston, and Piotr Zelenay, who collaborated closely with colleagues from the DOE Oak Ridge National Laboratory (ORNL), led by Karren More. The scientists explain that, at more than $1,800 per ounce, platinum is driving fuel cell costs way up. If the industry were to adopt platinum as a standard material, then increased demand would drive costs up even further. In the new catalyst, the reactions that are usually carried out thanks to platinum take place when triggered by a combination of carbon, iron and cobalt. This mix successfully replaces the precious metal and produces similar levels of efficiency. At the same time, the production of hydrogen peroxide – an undesirable compound whose creation diminishes the energy output of fuel cells – is maintained at very low levels, so there are multiple advantages to using the new carbon-iron-cobalt catalyst, the group explains. “The encouraging point is that we have found a catalyst with a good durability and life cycle relative to platinum-based catalysts,” explains Zelenay. He is the corresponding author of a new paper describing the catalyst, which was published in the April 22 issue of the top journal Science. “For all intents and purposes, this is a zero-cost catalyst in comparison to platinum, so it directly addresses one of the main barriers to hydrogen fuel cells,” the expert goes on to say.
28
The 2010 Nobel Prize in Physics went to the two scientists who first isolated graphene, one-atom-thick crystals of graphite. Now, a researcher with the University of Houston Cullen College of Engineering is trying to develop a method to mass-produce this revolutionary material. Graphene has several properties that make it different from literally everything else on Earth: it is the first two-dimensional material ever developed; the world's thinnest and strongest material; the best conductor of heat ever found; a far better conductor of electricity than copper; it is virtually transparent; and is so dense that no gas can pass through it. These properties make graphene a game changer for everything from energy storage devices to flat device displays. Most importantly, perhaps, is graphene's potential as a replacement for silicon in computer chips. The properties of graphene would enable the historical growth in computing power to continue for decades to come. To realize these benefits, though, a way to create plentiful, defect-free graphene must be developed. Qingkai Yu, an assistant research professor with the college's department of electrical and computer engineering and the university's Center for Advanced Materials, is developing methods to mass-produce such high-quality graphene. Yu is using a technology known as chemical vapor deposition. During this process, he heats methane to around 1000 degrees Celsius, breaking the gas down into its building blocks of carbon and hydrogen atoms. The carbon atoms then attach to a metallic surface to form graphene. "This approach could produce cheap, high-quality graphene on a large scale," Yu said. Yu first demonstrated the viability of chemical vapor deposition for graphene creation two years ago in a paper in the journal Applied Physics Letters. He has since continued working to perfect this method. Yu's initial research would often result in several layers of graphene stacked together on a nickel surface. He subsequently discovered the effectiveness of copper for graphene creation. Copper has since been adopted by graphene researchers worldwide. Yu's work is not finished. The single layers of graphene he is now able to create are formed out of multiple graphene crystals that join together as they grow. The places where these crystals combine, known as the grain boundaries, are defects that limit the usefulness of graphene, particularly as a replacement for silicon-based computer chips. Yu is attempting to create large layers of graphene that form out of a single crystal. "You can imagine how important this sort of graphene is," said Yu. "Semiconductors became a multibillion-dollar industry based on single-crystal silicon and graphene is called the post-silicon-era material. So single-crystal graphene is the Holy Grail for the next age of semiconductors."
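In outline, the deposition step Yu describes amounts to the thermal decomposition of methane on a hot metal surface; the overall stoichiometry can be sketched as follows (a simplification added for illustration; the real surface chemistry proceeds through several intermediate dehydrogenation steps):

    \mathrm{CH_4\,(g)} \;\xrightarrow{\ \sim 1000\,^{\circ}\mathrm{C},\ \text{metal surface}\ }\; \mathrm{C\,(graphene)} + 2\,\mathrm{H_2\,(g)}

A commonly cited reason copper works so well, consistent with the nickel-versus-copper observation above, is that carbon is nearly insoluble in copper, so growth is confined to the surface and tends to stop at a single layer.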
28
New NIST Imaging Tool Has X-Ray Eyes

Researchers at the National Institute of Standards and Technology have developed a new way of seeing, with X-ray "eyes" no less. Using its novel instrument, the NIST team can clearly glimpse minute voids, tiny cracks and other sometimes indiscernible microstructural details over a three-dimensional expanse in a wide range of materials, including metals, ceramics and biological specimens. In its current form, the technology, called ultra-small-angle X-ray scattering (USAXS) imaging, functions much like a film camera, albeit a highly specialized one. And where a camera needs a flash to create images, USAXS has the ultimate flash: the Advanced Photon Source at Argonne National Laboratory. Measuring 1,104 meters (nearly 0.7 mile) around, the APS is a new-generation synchrotron. It produces an abundance of extremely uniform high-energy X-rays that make the new imaging technique work. USAXS itself is an already established research technique, yielding plots of data points that correspond to angles and intensities of X-rays scattered by a specimen. With the new system, graphed curves become high-resolution pictures. And when taken from different perspectives, pictures can be assembled into three-dimensional images. Images are actually maps of the small fraction of X-rays that, instead of being absorbed or transmitted through the sample, are scattered by electrons in the material. Source: National Institute of Standards and Technology, December 2001.
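The article describes USAXS data as intensities recorded at a series of scattering angles; in the standard small-angle scattering convention (added here for orientation, not stated in the original), each angle is expressed as a momentum transfer

    q = \frac{4\pi}{\lambda} \sin\theta

where 2\theta is the scattering angle and \lambda the X-ray wavelength. Features of characteristic size d scatter at q on the order of 2\pi/d, so the ultra-small angles probed by USAXS correspond to the comparatively large, sub-micrometre to micrometre-scale voids and cracks the technique is designed to image.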
28
Coming: Superthread From Nanofibers
December 8th, 2003

Cylindrical molecules of carbon known as nanotubes are the strongest material known, and scientists have now spun yards of thread made of almost 100 percent nanotube. In the future, these threads could be woven into fabrics that stop bullets or be wound into cables many times as strong as steel. For now, though, the threads are less than the sum of their nanotube parts. "There are still defects," said Dr. Matteo Pasquali, a professor of chemical engineering at Rice University and head of the research team there that spun the threads.
28
Molecules 2012, 17(12), 14067-14090; doi:10.3390/molecules171214067. Published: 28 November 2012

Abstract: Fluorescence, the absorption and re-emission of photons with longer wavelengths, is one of those amazing phenomena of Nature. Its discovery and utilization had, and still has, a major impact on biological and biomedical research, since it enables researchers not just to visualize normal physiological processes with high temporal and spatial resolution, to detect multiple signals concomitantly, to track single molecules in vivo, to replace radioactive assays when possible, but also to shed light on many pathobiological processes underpinning disease states, which would otherwise not be possible. Compounds that exhibit fluorescence are commonly called fluorochromes or fluorophores and one of these fluorescent molecules in particular has significantly enabled life science research to gain new insights in virtually all its sub-disciplines: Green Fluorescent Protein. Because fluorescent proteins are synthesized in vivo, integration of fluorescent detection methods into the biological system via genetic techniques now became feasible. Currently fluorescent proteins are available that virtually span the whole electromagnetic spectrum. Concomitantly, fluorescence imaging techniques were developed, and often progress in one field fueled innovation in the other. Impressively, the properties of fluorescence were utilized to develop new assays and imaging modalities, ranging from energy transfer to image molecular interactions to imaging beyond the diffraction limit with super-resolution microscopy. Here, an overview is provided of recent developments in both fluorescence imaging and fluorochrome engineering, which together constitute the “fluorescence toolbox” in life science research.

Luminescence is one of those exquisite and amazing phenomena that Nature has to offer. Luminescence (Latin: lumen = light), whether man-made or created by Nature itself, is the only phenomenon that lightens up “life”, besides the visible radiation from combustion and photon emission as a consequence of nuclear fusion of hydrogen by the yellow dwarf star (G2V) that shines on our planet. Interestingly, a photon produced in the core of the sun will take thousands of years to travel from the core to the surface, but only 8 minutes to reach our planet. Commonly, people identify at least two types of luminescence: fluorescence and phosphorescence. However, luminescent processes comprise a large group of related phenomena that have purely physical, chemical, and/or biological/biochemical origins (Table 1).

Table 1. Overview of various luminescent phenomena.

For instance, bioluminescence (Figure 1), the emission of light by organisms, can be found in various cephalopods of the order Teuthida (squid), numerous members of the phylum Cnidaria (jellyfish), and the Lampyridae (fireflies), which all produce light through chemical reactions (chemiluminescence). One of the more astonishing spectacles that may be observed is the nightly glowing of water (Figure 1F), caused by Noctiluca scintillans (commonly known as Sea Sparkle), a non-parasitic marine-dwelling dinoflagellate species that is commonly found in shallow waters along the coast and in estuaries. Its bioluminescence stems from a luciferin-luciferase system that is concentrated in spherical organelles (microsources) within the cytosol and is a reaction to mechanical stimulation of N.
scintillans ; similar chemical mechanisms are at work in fireflies and other organisms. Generally, deep-sea organisms use their luminescent properties to lure prey, to communicate, to find a mate, or as a defense mechanism, and it is estimated that only a small fraction of the luminescent creatures that dwell in and below the mesopelagic zone (>200–1,000 m depth) of the world’s oceans have been discovered to date. Amongst these bioluminescent entities, however, there is one organism that gained world fame after a fluorescent protein was discovered that has revolutionized life science research: the hydromedusa Aequorea victoria, which contains the now famous “Green Fluorescent Protein” (GFP) [2,3,4,5,6,7]. Aequorea victoria produces light by the quick release of calcium ions, thereby activating the photoprotein aequorin, which in turn excites GFP. Aequorin is a complex of the 21.4 kDa apoprotein, dioxygen and coelenterazine (luciferin), which results in the oxidative formation of coelenteramide (excited state) and blue emittance at 470 nm when returning to the ground state. Subsequently, GFP absorbs coelenteramide’s blue emission and emits at 505 nm in the green; hence the name Green Fluorescent Protein. The biological function of bioluminescence in jellyfish–bear in mind that continuous overall corporal bioluminescence, as often mistakenly assumed from photographic images (Figure 1B), has not been observed to date–is not well understood, but it is assumed that in some jellyfish species it is used to find a mate or for defense purposes. In other creatures, complex and rapidly changing bioluminescent patterns may be observed, which might constitute a form of communication that currently eludes us. Some squid species have been observed to use bioluminescent flashes to stun potential prey, as recently recorded for the giant squid Taningia danae . Luminescent phenomena are not limited to biological systems, but can unexpectedly occur in a myriad of other natural objects and systems. One of these is the capacity of certain solids to emit light due to changes in the crystal structure in response to the exertion of an external force, as in mechanoluminescence (Table 1). Scratching the surface of quartz will induce luminescence to occur where the rock’s surface is disturbed [9,10]. Similarly, the crushing of such ordinary materials such as white crystalline sugar will induce flashes of luminescence [11,12], because bonds are disrupted and the energy is partially dissipated as light. That chemical bonds play important roles in the luminescent properties of certain solids is further exemplified by the fact that photons may equally be produced during crystallization processes [13,14] in which atoms take their place at certain positions in the crystal lattice and bonds are formed. This form of luminescence is aptly called crystalloluminescence. Ever since the appearance of hominids on this planet and especially Homo sapiens, the thinking ape, man has striven to control its environment. The conquest and control of fire did not only change man’s eating habits, but provided light and heat, which allowed Homo sapiens to conquer regions in which other hominids could not readily survive, and also gave him an edge in the defense against predators and in the development of a myriad of tools. In modern times, this development continued and the invention of the incandescent light bulb by Thomas Edison in 1879 to date remains one of man’s greatest achievements . 
This invention not only made us independent from the Sun and its illuminating rays, but also had a major impact on scientific research. What would spectroscopy be without a light source, or optical microscopy for that matter of fact? Recent decades have subsequently seen the development of high power and monochromatic light sources, such as lasers and light emitting diodes (LED). The latter will see a bright future, as countries around the world are phasing out the production of conventional incandescent light bulbs in an effort to save energy and reduce global warming (ironically this phasing out also stimulated the use of energy-saving light bulbs that contain harmful and toxic organic compounds and mercury). As stated previously, the utilization of light-matter interactions has had a major impact on scientific research. It is not only the basis for analytical techniques such as UV/vis-, atomic absorption, or infrared spectroscopy, but also for the visualization of microscopic, and more recently even nanoscopic structures via optical microscopy or nanoscopy (super-resolution microscopy). The first scientists who developed optical microscopy for the observation of biology at the microscopic level were Robert Hooke and Antonie van Leeuwenhoek. Hooke developed the compound microscope, which consisted of a stage, a light source and three optical lenses; general features that modern microscopes still contain. With this microscope, Hooke observed insects, such as lice and fleas, plant seeds and plant sections, and published both his biological observations and fundamentals of microscopy in 1665 in the book entitled “Micrographia” . It was Robert Hooke who coined the term “cell” when observing the boxlike porous structure of cork, because it reminded him of the cells of a monastery–from the Latin cellula, meaning “a small room”. However, it needs to be noted that Hooke did not observe cells as in our current biological understanding of the word. The observation of single cellular organisms was first achieved by Antonie van Leeuwenhoek in 1678 with a simple microscope containing a single, convex lens that could resolve details as small as 1 μm . His microscope was more difficult to handle than Hooke’s compound version, but with it van Leeuwenhoek observed his “animalcules”: various bacteria, protozoa and spermatozoa, and also the striped patterns in muscle fibers and blood flow in capillaries. These initial pioneering steps led to further development of optical microscopy and consequently major biological discoveries. The jump from white light microscopy to fluorescence microscopy was by comparison small, and from the beginning of the 20th century, many now prominent names were involved in its development. August Köhler constructed the first ultraviolet (UV) illumination microscope in 1904 at Zeiss Optical Works, but it was Oskar Heimstädt who developed the first rudimentary fluorescence microscope in 1911, with which he studied autofluorescence in organic and inorganic compounds . Improvements were made in 1929 by Philipp Ellinger and August Hirt and their epi-fluorescence microscope is still conceptually used in today’s laboratories. With the introduction of lasers (light amplification through stimulated emission of radiation) by Gould, Townes, Schawlow, and Maiman [19,20] in the 1960s, the lack of excitory power was overcome and this paved the way for the development of confocal microscopy. 
Lasers offered what other light sources could not: a high degree of spatial and temporal coherence, which means that the diffraction limited monochromatic and coherent beam can be focused in a tiny spot, achieving a very high local irradiance. Confocal laser scanning microscopy (CLSM) combines high-resolution optical imaging with depth selectivity and was originally invented by Marvin Minsky in 1957 (reference ). Advances in resolution and penetration depth were achieved by multi-photon microscopy, first theoretically described by Maria Göppert-Mayer in 1931 in her doctoral thesis , and subsequently further developed by Winfried Denk in the lab of Watt Webb . Concomitant evolution in fluorochrome development allowed fluorescence microscopy to grow beyond the classical boundaries of optical microscopy. Particularly the use and genetic engineering of fluorescent proteins that span the visibly spectrum [7,25,26,27,28], in which the fluorescent properties are controlled, allowed methods such as nanoscopy to flourish, thereby “cheating” Abbe’s diffraction limit and allowing imaging with unsurpassed resolution [30,31]. In vivo whole animal imaging was spurred on by developments in nanoparticle technology, especially quantum dots [32,33,34]. Also chemical engineering of classical organic dyes by companies such as Molecular Probes (currently Life Technologies/Invitrogen) meant that for virtually all fields of biological research, probes now became available. This not only led to an increase and evolution in imaging techniques, but also caused new developments in fluorescence spectroscopy, fluorescence multiplexing, high-throughput screening, and the development of simple and fast clinical tests; many of which now can be found in the small labs of local physicians. This special edition of Molecules brings together a number of articles dedicated to fluorescence and its application in life- and biomedical sciences. Collectively, these articles illustrate recent advances in the field and highlight a promising and bright future for life- and biomedical sciences, as well as other, related fields of technology, as concomitant evolution in fluorochrome and imaging technique development rapidly opens up novel avenues of research. In particular, the development of nanoscopy as a “real-time” imaging technique should propel cell biological and biomedical research to new discoveries and a better understanding of both normal and pathological biological processes. 2. Fluorescence Techniques and Fluorescence Microscopy 2.1. Fluorescence: “Exciting” Luminescence Luminescence has been known for ages by the term “phosphor”–from phosphorus, which means the light bearer in ancient Greek–used to designate minerals that glow in the dark after exposure to daylight. Luminescence may be defined as “spontaneous emission of radiation from an electronically or vibrationally excited species not in thermal equilibrium with its environment” . In fact, it makes relatively little difference what type of process causes absorption of a suitable energy quantum–light, radio waves, heat, ionizing radiation, mechanical force, electric current, etc. (Table 1)–and subsequent excitation to the excited state. If the material does not dissipate the excess energy via non-radiative processes, such as collision with the surrounding molecules, luminescence will and must occur. 
Of the various luminescent phenomena, photoluminescence in particular has had a major impact on a myriad of scientific and technological disciplines, including chemistry, biology, medicine, physics, and even materials science and nanotechnology [34,37,38]. As stated previously, photoluminescence may be divided into fluorescence and phosphorescence, which both involve the absorption of photons (and their energy), resulting in the promotion of ground state electrons (excitation) to the so-called excited state (Figure 2). This only happens in substances with suitable electron and quantum chemical energy level distributions (a susceptible substance), and the absorbed energy is subsequently dissipated, after a particular time, by reemitting light (photons) from electronically excited states. According to IUPAC rules, fluorescence may be defined as the spontaneous emission of light radiation from an excited entity with retention of spin multiplicity. Nota bene: the spin multiplicity is defined as the number of possible orientations, calculated as 2S+1, of the spin angular momentum corresponding to a given total spin quantum number (S) for the same spatial electronic wavefunction. The average time these species spend in the excited state is called the fluorescence lifetime, and the photon's energy, or generally a quantum of light, follows from Planck's law: E = hν = hc/λ, where h is Planck's constant, ν the photon's frequency, c the speed of light and λ the wavelength. Phosphorescence is often phenomenologically described as being longer-lived than fluorescence, which disappears simultaneously with the end of the excitation. However, this is only partially correct, because there are short-lived phosphorescent species, such as zinc sulfide (violet), which have lifetimes comparable to fluorescent species. However, in phosphorescence, the excited species passes through an intermediate state via intersystem crossing (Figure 2B). Phosphorescence thus requires a change in spin multiplicity, from singlet to triplet state or vice versa (see the spin arrows in Figure 2B), whereas in fluorescence this multiplicity is retained. The subsequent relaxation from the metastable triplet state T1 to the ground state S0 is, because of the necessary spin reversal, forbidden and therefore commonly several orders of magnitude slower than fluorescence. For this reason, many phosphorescent species emit their light for prolonged periods of time; the most illustrative examples are the phosphors used in dials and indices of wrist watches. Fluorescence as a phenomenon is a complicated physical process, with numerous alternate pathways of energy conversion and/or dissipation, or environmental influencing of the final luminescent outcome. These include non-radiative decay processes (intersystem crossing, internal conversion, predissociation, dissociation, and external conversion), quenching, energy and charge transfer, fluorescence anisotropy, intermittency, and photobleaching, to name but a few. These phenomena directly affect the emission spectrum (form and maxima), fluorescence intensity and number of photons emitted per unit time, and fluorescence lifetime. A concise introduction on fluorescence, fluorescence phenomena and artifacts, conventional single photon, confocal, two-photon and super-resolution fluorescence microscopy is provided in reference . For more detailed information on fluorescence and its phenomena, the reader is referred to the "bible" of fluorescence: Lakowicz's "Principles of fluorescence spectroscopy".
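As a quick worked example of the relation just given (the arithmetic is added here for illustration and uses the emission wavelengths quoted earlier for the Aequorea system), a blue photon at 470 nm carries an energy of

    E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\,nm}}{470\ \mathrm{nm}} \approx 2.6\ \mathrm{eV}

while GFP's green emission at 505 nm corresponds to about 2.5 eV; the emitted photon is always equal to or lower in energy than the one absorbed, which is the origin of the red shift (Stokes shift) between absorption and emission spectra.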
Equally, advanced information on fluorescence microscopy can be found in Pawley’s: “Handbook of biological confocal microscopy” . 2.2. Advanced Fluorescence Microscopic Techniques As stated in §2.1, fluorescence is influenced by numerous phenomena that may cause artifacts; a fact that should be taken into account when planning experiments. Nonetheless, the competent and well trained researcher will be able to handle such artifacts in order to prevent serious perturbations of the results and misinterpretation of data. However, what is most fascinating is the fact that clever researchers have always been able to turn a technological down-side to an advantage. Thus, what might be experienced by one researcher as a disadvantage and unwanted “artifact”, e.g., photobleaching or intensity loss via resonance energy transfer, the same feature may be cleverly used by another to solve her/his scientific question, e.g., to study diffusion of molecules via Fluorescence Recovery After Photobleaching (FRAP) or molecular interactions via Förster Resonance Energy Transfer (FRET). Recently, Helen Ishikawa-Ankerhold and I reviewed the basic concepts of these advanced fluorescence microscopy techniques, their utilization and value to cell biological research, and new developments in the field . Basically, fluorescence microscopy-based methods to determine molecular interactions, molecule movement, whether by molecular diffusion or active transport, or a combination thereof, are based on energy or charge transfer phenomena or on methods that selectively and spatially impede fluorescence; either permanently or reversibly. The gold standard for imaging interactions between biomolecules is based on the aforementioned energy transfer between fluorescently labeled or fluorescent molecules. This photophysical process occurs when the excited state energy from a donor fluorochrome is transferred via a non-radiative mechanism to a ground state acceptor chromophore via weak long-range dipole–dipole coupling. First described mathematically by Theodor Förster in the 1940s [42,43], it requires that the donor’s emission spectrum overlaps the acceptor’s absorption spectrum and that donor and acceptor are in close proximity. To determine the movement or transport of biomolecules, photobleaching-based or photoswitching-based methods are used. A wide variety of bleaching methods, including FRAP, Inverse FRAP (iFRAP), Fluorescence Loss in Photobleaching (FLIP), and Fluorescence Localization after Photobleaching (FLAP), have been used to determine the diffusion or active transport of biomolecules, the connectivity between different compartments in the cell or the mobility of a molecule within the whole compartment, and the mobility of molecules in small areas of an organelle, particularly the nucleus, and their exchange with the surrounding environment, and other applications. An enhancement or addition is provided by using fluorescent proteins that can be switched, either irreversibly “on” or “off” (photoactivation), from one color to another (photoswitching) or reversibly on/off, as in photochromic proteins (see Reference ). The advantage lies in the fact that less toxic compounds are produced (reactive oxygen species formation is always associated with photobleaching), the aforementioned proteins offer more precise localization of fluorescence, the labeling can in principle be well-controlled in a spatio-temporal manner, and fast moving sub-populations can be detected. 
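Returning to the FRET mechanism mentioned above, the distance dependence that makes it useful as a "molecular ruler" is usually written as follows (standard Förster theory, added here for reference rather than taken from the text):

    E = \frac{1}{1 + (r/R_0)^6}

where r is the donor-acceptor separation and R_0, the Förster radius at which transfer is 50% efficient, is typically a few nanometres for common fluorophore pairs; the steep sixth-power dependence means measurable transfer essentially occurs only when the two labels are within roughly 1-10 nm of each other.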
Furthermore, by combining techniques such as FRAP and FRET, interactions between moving biomolecules can be imaged with high resolution. Such measurements would not be possible with conventional biochemical and cell biological assays.
2.3. Chemically Induced Photoswitching of Fluorescent Probes for Super-resolution Microscopy
Optical microscopy is fundamentally limited in its maximal resolution by the diffraction of light. Ernst Abbe formulated the theoretical foundation for this diffraction limit in 1873: the smallest resolvable distance between two points cannot be smaller than half the wavelength of the imaging light . The Abbe diffraction limit stood firm for more than a century, until evolving knowledge of the mechanisms of fluorescence allowed researchers to "cheat" the diffraction limit by creating circumstances in which it no longer holds. For instance, stimulated emission depletion (STED) is a technique in which an initial "broad" focal spot is shrunk in diameter (below the diffraction limit) by depleting the excited state of the fluorochromes at its periphery through stimulated emission with a doughnut-shaped STED beam that is red-shifted and time-delayed by Δt (modulation of transitions between two states). This represents one category of super-resolution approaches. Alternatively, a number of techniques are based on the temporal confinement of fluorescence and the precise spatial localization of individual fluorochromes, achieved by repeated photoswitching of a limited subset of the total fluorochrome pool, from which a super-resolution image can be reconstructed. These include structured illumination approaches (SIM), Photo-Activated Localization Microscopy (PALM), STochastic Optical Reconstruction Microscopy (STORM [47,48]), and others.
Techniques such as PALM and STORM rely on fluorescent probes that can be switched reversibly between a fluorescent "on" and a dark "off" state, or that can at least be photoactivated. In STORM, a cyanine switch was originally used: a pair of orange- and red-emitting carbocyanine dyes, Cy3 and Cy5, in which Cy5 can be reversibly switched between fluorescent and dark states provided that a second activator dye, Cy3, is in close proximity. A major disadvantage of STORM is that most of the organic probes used preclude imaging in living cells, because they require the removal of molecular oxygen or a reducing environment, which puts the cell under extreme stress. Direct STORM (dSTORM), a variation of the original technique, does not require paired photoswitches, but uses conventional stand-alone carbocyanine dyes (e.g., Cy5, Alexa Fluor 647, and several dyes from the ATTO series). A major advantage is that these dyes can be used in living cells in combination with site-specific and targeted labeling of the biomolecule of interest. PALM, on the other hand, uses fluorescent proteins and thus has the advantage that the label is genetically co-expressed with the protein of interest, at the required location (intracellularly and on the protein of interest), without the need to disrupt membranes and with negligible perturbation of cellular homeostasis. An extrapolation of the probes used in PALM, or a combination with the dSTORM approach, might significantly improve super-resolution live cell imaging in the presence of molecular oxygen.
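To put the resolution gain of these techniques in perspective, Abbe's limit (introduced at the beginning of this section) can be written explicitly; the numbers below assume a typical high-numerical-aperture oil-immersion objective and are illustrative only:

```latex
d = \frac{\lambda}{2\,n\sin\alpha} = \frac{\lambda}{2\,\mathrm{NA}},
\qquad
d \approx \frac{500\,\mathrm{nm}}{2\times1.4} \approx 180\,\mathrm{nm}
```

Structures closer together than roughly this distance are rendered as a single diffraction-limited spot, which is precisely what the depletion- and localization-based approaches described above circumvent.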
Motivated by this possibility, Ulrike Endesfelder from Mike Heilemann's group used the basic knowledge of the mechanism of photoswitching in organic fluorochromes (under dSTORM reducing conditions) to improve super-resolution imaging and allow its application in living cells . They investigated whether fluorescent proteins, i.e., PAmCherry1 (photoactivatable), mEos2, Dendra2 and psCFP2 (all photoconvertible), and bsDronpa (photoswitchable), can per se be used for live cell imaging under dSTORM conditions, which environmental conditions are required or optimal, and whether these proteins can be combined with organic dyes for dual-color super-resolution imaging.
2.4. Twisted Intramolecular Charge Transfer and Excimer Emission in 2,7-bis(4-Diethylaminophenyl)-fluorenone
Energy transfer phenomena, such as FRET, have been applied extensively to study molecular interactions and conformational changes in molecules, as probes in reporter assays, and more recently in organic semiconductors, such as OLEDs (organic light-emitting diodes). Next to FRET, in which energy is transferred between susceptible molecules, charge or electron transfer can also deplete the excited state, thereby changing the fluorochrome's fluorescent properties. Dexter electron transfer (DET), for instance, is a process in which two molecules (intermolecular) or two parts of the same molecule (intramolecular) bilaterally exchange their electrons . Unlike FRET, DET takes place at much shorter distances. Charge transfer processes include excimer and exciplex formation : short-lived homodimers (excimers) or heterodimers (exciplexes) of which at least one partner is in the excited state. Such complexes form via electrostatic attraction, because of partial charge transfer between the individual entities, and show red-shifted emission compared with the monomer's emission.
Twisted Intramolecular Charge Transfer (TICT) is a relatively common phenomenon in molecules that consist of an electron donor and acceptor pair linked by a single bond . In polar environments, such fluorochromes undergo fast intramolecular electron transfer from the donor to the acceptor part. This electron transfer is followed by intramolecular twisting of donor and acceptor about the single bond (Figure 3), producing a relaxed perpendicular structure and dual fluorescence, i.e., emission from a high-energy band through relaxation of the locally excited state and from a lower-energy band through emission from the TICT state. Because there are a number of competing relaxation pathways, it would be highly desirable to control the emissive relaxation from the TICT state, since TICT fluorescence holds great promise for applications such as OLEDs, chemosensors, and dye-sensitized photovoltaics. To further pursue the objective of obtaining switchable molecules, Konishi's group recently developed a donor-acceptor-donor dye consisting of a 2,7-disubstituted fluorenone with diethylaminophenyl moieties as strong electron-donating groups . This novel dye can easily be switched between TICT and excimer emission via the polarity of the surrounding solvent, without any ground state changes. The authors hypothesized that when excimer emission was observed, either the TICT state was not formed, or, once formed, it was converted into an excimer by the Coulombic force acting between opposite charges, as in the harpooning effect .
The development of this new dye, or of others like it, might in future lead to sensors that report on solvent polarity in real time or as part of quick tests.
2.5. Fluorescence Quenching to Study Binding of Flavonoids to Bovine Serum Albumin
In a recent publication, Liu et al. showed how fluorescence quenching can effectively be used in biological research to determine the binding mechanisms of phytochemicals to serum proteins. In this study, the authors investigated the interaction between five flavonoids, i.e., the polyphenols formononetin-7-O-β-D-glucoside, calycosin-7-O-β-D-glucoside, calycosin, rutin, and quercetin, and bovine serum albumin (BSA). By utilizing BSA's intrinsic ability to fluoresce (autofluorescence), they were able to show that formation of a flavonoid-BSA complex led to quenching of BSA's autofluorescence. Fluorescence quenching occurs because a molecular species (the quencher) in close proximity depletes the excited state of the fluorochrome by non-radiative mechanisms (Figure 2B), thereby reducing the quantum yield and/or the lifetime. To provide a quantitative measure of the binding affinity, fluorescence quenching constants were determined using the Stern-Volmer and Lineweaver-Burk equations (a minimal numerical sketch of such an analysis is given below). Based on these quenching constants, the compounds ranked in the following order: quercetin > rutin > calycosin > calycosin-7-O-β-D-glucoside ≈ formononetin-7-O-β-D-glucoside. Thermodynamic evaluations demonstrated that hydrophobic interactions played a major role in the flavonoid-BSA interaction. Mechanistic studies suggested that flavonoid-BSA quenching occurred through static quenching, i.e., direct interaction of the fluorochrome and the quenching molecule, for instance by forming a non-fluorescent ground state complex. To further substantiate their findings, the authors performed FRET measurements and determined the distance r between BSA (donor) and the aforementioned flavonoids (acceptors). The values for r were 4.12 nm for formononetin-7-O-β-D-glucoside, 3.85 nm for calycosin-7-O-β-D-glucoside, 3.01 nm for calycosin, 5.72 nm for rutin, and 4.75 nm for quercetin, demonstrating a close interaction between the flavonoids and BSA. A comprehensible review of Förster's theory of non-radiative energy transfer, including references to more specialized and comprehensive overviews, was recently provided by Ishikawa-Ankerhold et al. .
2.6. Molecular Morphology of Pituitary Cells: Immunohistochemistry to Fluorescence Imaging
Electron microscopy-based (EM) in situ hybridization (ISH) is an essential technique for studying a biomolecule's intracellular distribution and its role in both normal and abnormal cellular behavior. The combination of ISH and immunohistochemistry (IHC) with EM (EM-ISH & IHC) provides sufficient ultrastructural resolution to evaluate the intracellular localization of even small biomolecules, such as mRNA. With the development of nanoparticles (§3.7), especially semiconductor quantum dots (Qdots), it is now possible to obtain sufficient optical signal from individual biomolecules in confocal laser scanning microscopy (CLSM), albeit with lower resolution than EM. Matsuno and co-workers scrutinize the developments from conventional immunohistochemistry to fluorescence imaging, with a particular focus on the intracellular localization of mRNA and the exact site of pituitary hormone synthesis on the rough endoplasmic reticulum in pituitary cells.
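Returning briefly to the quenching analysis of §2.5: the Stern-Volmer constant is the slope of F0/F plotted against the quencher concentration. The sketch below uses synthetic data and an assumed KSV purely to illustrate the procedure; it is not the authors' data or analysis:

```python
import numpy as np

# Illustrative Stern-Volmer analysis: F0/F = 1 + Ksv * [Q].
# Synthetic data are generated with an assumed Ksv; a linear fit recovers it.

rng = np.random.default_rng(0)
ksv_true = 1.2e5                              # assumed quenching constant, L/mol
conc = np.linspace(0, 2.5e-5, 6)              # quencher concentrations, mol/L
f0_over_f = 1.0 + ksv_true * conc             # ideal static-quenching response
f0_over_f += rng.normal(0, 0.02, conc.size)   # add measurement noise

slope, intercept = np.polyfit(conc, f0_over_f, 1)
print(f"fitted Ksv ≈ {slope:.3g} L/mol (intercept {intercept:.3f})")
```

The same kind of fit, applied to titrations of BSA with each flavonoid, is what yields the ranking of quenching constants quoted above.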
In Matsuno and co-workers' paper, not only are ISH, IHC, CLSM and EM techniques discussed, but the authors also show that both EM-ISH & IHC and ISH & IHC using Qdots and CLSM are useful for understanding the relationship between a protein and its mRNA, simultaneously and in two or three dimensions. Furthermore, they developed an experimental pituitary cell line (GH3) in which growth hormone (GH) is linked to enhanced yellow fluorescent protein (EYFP). The GH3 cell line secretes the GH-EYFP fusion protein upon stimulation by Ca2+ (influx or release from storage) and allows real-time visualization of the intracellular transport and secretion of GH. This progression from conventional immunohistochemistry to fluorescence imaging allows researchers to consecutively visualize the processes of transcription, translation, transport and secretion of the anterior pituitary hormone.
3.1. Pyrene: A Probe for Protein Conformation Studies
Pyrene (Figure 4A) is one of the most widely used and oldest fluorochromes in cell biology and biophysics. Pyrene-based probes have commonly been used to study membrane fusion, lipid domain formation, lipid transport mechanisms, and lipid-protein interactions with FRET, as well as in photodynamic therapy, nucleic acid dynamics, and studies of protein conformation and conformational changes, to name but a few applications. Pyrene can easily be incorporated into phospholipids, either by substitution at the sn-2 position or biosynthetically by growing cells in the presence of pyrene fatty acids [55,56]. Furthermore, proteins can be site-specifically labeled on lysines (via succinimidyl ester, isothiocyanate and sulfonyl chloride reactive groups) or on cysteines (via maleimide and iodoacetamide reactive groups). It has long been known that pyrene's fluorescence and spectral properties are highly sensitive to changes in the probe's microenvironment. Besides utilizing the characteristics of, and changes in, the monomer emission bands (350–400 nm; Figure 4B) for polarity measurements of the microenvironment, the formation of a broad excimer (excited-state dimer of two interacting pyrene molecules) emission peak at ~460 nm can be used to study protein conformation, conformational changes, protein folding and unfolding, and protein-protein, protein-lipid and protein-membrane interactions. In a recent overview, Bains et al. discuss the intrinsic fluorescence properties of pyrene, the mechanism of excimer formation, and how to extract information from these to study protein conformation and conformational changes . With this review, the authors provide insightful information for the interested researcher.
3.2. Li+ Selective Podand-Type Fluoroionophores
The sensing of ions is important in a myriad of scientific and technological disciplines, including biology, (bio)medicine, and environmental chemistry. Common strategies include molecules that carry ion recognition units, such as crown ethers or other complexing structures, which upon ion binding induce changes in the absorption and/or fluorescence behavior of the attached fluorochrome. Recently, we designed a luminescent lanthanide complex-based anion sensor with electron-donating methoxy groups for the concomitant monitoring of multiple anions, including fluoride, acetate and dihydrogen phosphate . Metal ion sensing is equally important, especially since metal ions play key roles in biological and environmental systems. Nishimura et al.
designed podand-type fluoroionophores in which the ion recognition unit is coupled to pyrenyl groups connected by appropriate linkers : 2,2′-bis(1-pyrenylacetyloxy)-diphenyl sulfide (3), sulfoxide (4), and sulfone (5). These were partially sensitive to alkali metal ions (Li+, Na+, K+, Rb+, Cs+), and ion binding induced characteristic changes in their emission spectra. Most importantly, compound (4), which contains a sulfinyl group as the non-cyclic binding site, responded effectively to Li+ binding and would constitute a suitable Li+ fluorescence sensor.
3.3. Molecular Dynamics Simulations of Fluorescent Membrane Probes
Probing biomembrane dynamics, structure, and membrane-based cellular physiology is commonly performed with fluorescent probes, by closely observing changes in the spectrum, fluorescence lifetime and quantum yield, and by measuring doubly labeled constituents to determine FRET. A variety of fluorescent probes, such as the aforementioned pyrene (§3.1), are generally used to study the biophysical behavior of biomembranes because of their high sensitivity, versatility, and sub-nanosecond time resolution. Such probes are either inserted into the lipid bilayer or covalently attached to lipids. However, depending on the particular probe used, local or widespread perturbations of the biomembrane may occur, i.e., disruption of the bilayer, of the dynamics of bilayer constituents, and of bilayer thermotropics, and experiments with probe-based methods might thus be compromised. It is therefore essential to understand such perturbations and to develop probes that interfere minimally with normal biomembrane properties and homeostasis. Over the past decade, molecular dynamics simulations (MDS) have been developed to analyze the location and dynamics of an inserted probe and its effect on the bilayer. Until recently, these MDS were based on simple atomistic simulations of non-polar probes in fluid disordered bilayers. However, the field has not been stagnant, but has moved towards improved and more intricate MDS methodologies that allow the simulation of increasingly complex fluorochromes and the extension of MDS to ordered bilayers, particularly those containing cholesterol (an important regulator of membrane fluidity in mammalian cells). Consequently, Loura and Ramalho review these developments to provide a broad life science audience with easy access to them. They show that a dramatic increase and diversification of MDS has taken place, with reported studies in all common lamellar lipid phases (liquid disordered, liquid ordered and gel phases). Simple apolar probes such as DPH and the aforementioned pyrene (see §3.1) have been the focus of study, but recent emphasis has shifted to complex amphiphilic probes, e.g., NBD, BODIPY, rhodamine and cyanine dyes.
3.4. Fluorescent Lipids in Fusogenic Liposomes for Cell Membrane Labeling and Visualization
Biomembranes are important structures in cells. Not only do they ensure compartmentalization so that the multitude of biological and chemical processes remain separated, provide a barrier against the harsh extracellular environment, allow only particular molecules to enter and leave the cell, serve as storage places for signaling molecules and energy, and selectively transport both signals and biomolecules within the cell, but the plasma membrane is also the largest organelle in the cell. Disruption of these processes may cause disease, and not surprisingly, membranes and their dynamics have been the focus of intense research.
To label biomembranes, fluorescently labeled lipids are commonly used. These, however, suffer from a number of drawbacks, depending on the label and concentration used: (i) large fluorochromic groups might perturb the biomembrane or might not reflect the endogenous motility; (ii) labeling of living cells is difficult, except when the membrane label is incorporated biosynthetically during cell culture [61,62]; (iii) labeling procedures might induce cellular stress and therefore perturb the experimental results; and (iv) the labeling efficiency is generally low. The latter restriction was overcome with the introduction of fusogenic liposomes, which contain neutral and positively charged lipids and induce fusion with the plasma membrane. To surmount the majority of the aforementioned drawbacks, Kleusch et al. developed a method in which novel combinations of fluorescent lipid derivatives are delivered in fusogenic liposome carriers . The authors specifically used a combination of a biologically irrelevant fluorescent component that triggers membrane fusion at a concentration of 2−5 mol%, e.g., DiR, and a second, biologically active fluorescent component, e.g., sphingomyelin-BODIPY-FL. DiR (1,1'-dioctadecyl-3,3,3',3'-tetramethylindotricarbocyanine iodide) is a near-infrared fluorescent, lipophilic carbocyanine that is weakly fluorescent in water but highly fluorescent and photostable when incorporated into membranes . As the authors put it, the primary advantage of a combined fusogenic delivery system is the controlled delivery of fluorescent molecules over a broad concentration range. Furthermore, this research shows that a significantly improved fluorescent signal can be obtained, with excellent signal-to-noise ratios.
3.5. Oligothiophenes as Fluorescent Markers for Biological Applications
Oligothiophenes are a class of organic molecules that are conveniently and flexibly produced by coupling repeating thiophene monomers (C4H4S; a five-membered ring containing one sulfur atom), most commonly via oxidative homocoupling or metal-catalyzed C-C coupling, e.g., Kumada, Suzuki, or Negishi couplings. Because virtually any form can be built, large conjugated π-systems can be produced with electronic properties that are controllable over a wide range. Besides this potential for structural variation, oligothiophenes have unique electronic, optical, and redox properties, show unique self-assembling behavior on surfaces and in bulk, and the high polarizability of the sulfur atoms in the thiophene rings leads to stabilization of the conjugated chain and to excellent charge transport properties . Oligomers of thiophene are widely used in organic electronics, such as OLEDs, because of their semiconductor properties. In biological applications, especially for labeling DNA, oligothiophenes have gained much interest over the past decade, particularly because their fluorescent properties can be modulated by varying the number of thiophene rings and the nature of the side-chains. Capobianco et al. extensively discuss the use of oligothiophenes as fluorescent probes in biological applications . Their review addresses the derivatization of oligothiophenes with active groups, such as phosphoramidite, N-hydroxysuccinimidyl and 4-sulfotetrafluorophenyl esters, isothiocyanate and azide, in order to covalently label the biomolecule of interest, especially DNA. Furthermore, the authors describe how functionalized oligothiophene probes can be used in hybridization studies and bio-imaging.
3.6. Phthalocyanines in Biomedical Optics
Phthalocyanine derivatives (PcDer) have extensively been used in various dye-based applications, since they show intense green to blue colors, depending on the functional groups and the complexed metal ion. Approximately one quarter of all synthetically produced organic pigments are PcDers, and this class of dyes is thus widely applied in industrial settings, such as paints, printing ink, and leather, textile and paper dyeing. In biomedical science, a multitude of PcDers have been developed and are under increasing investigation as photosensitizers (PS), amongst others for photodynamic therapy, and as imaging agents in bio-imaging. PcDers are porphyrin-like PS, consisting of tetrapyrrolic nitrogen-linked aromatic macrocycles, which have high extinction coefficients around 670 and 750 nm. Their properties, such as fine-tuning of NIR absorbance, pharmacokinetics, biodistribution, solubility, and stability, can be directly controlled via the axial and peripheral substituents (Figure 5). In a recent review, Norbert Lange and co-workers present a comprehensive overview of the use of PcDers in photodynamic therapy and as imaging agents, discuss their pharmacological and therapeutic significance, and critically address some of the remaining shortcomings and how to overcome them.
3.7. Fluorescent Nanoprobes for in vivo Imaging
The convergence of nanotechnology (the construction, manipulation, and utilization of materials at nanoscale dimensions) and biotechnology into nanobiotechnology has produced a multitude of (semi)synthetic nanoparticles with entirely new possibilities for biological investigations and, potentially, for medical interventions at the (sub)cellular level, i.e., nanomedicine. Nanoparticles offer significant advantages over more conventional strategies in that they display enhanced sensitivity and shorter turnaround times, allow multiplex analysis for in vitro diagnostics and imaging with excellent signal-to-noise ratios, and potentially permit the combination of imaging and targeted therapy (multimodal probes) . One of the prime reasons why nanobiotechnology holds so much promise is that nanoparticles are in the same size range as biomolecules. Most importantly, the physical properties of materials at the nanoscale are distinctly different from those of the same material in bulk form, and are size- and shape-dependent. In this fashion, the fluorescent properties of nanoparticles can be directly controlled by controlling their size and shape (an illustrative estimate of this size dependence is sketched below). Juliette Mérian from Isabelle Texier's group provides an interesting and comprehensive overview of current developments in nanoparticle-based probe design and their application to in vivo imaging . The authors closely evaluate the steps necessary to translate the current generation of probes under preclinical evaluation into routine application in a clinical setting. It is expected that in the coming decades nanoparticles may indeed be routinely used as imaging agents in diagnostics and surgical guidance, or as controlled-release vehicles for surface-bound or internal bioactive payloads, and as such provide site-specific, less stressful and patient-friendly medicine with fewer side-effects.
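One widely quoted way to rationalize this size dependence for semiconductor quantum dots is the Brus effective-mass approximation. The sketch below uses approximate literature parameters for CdSe only to illustrate the qualitative blue-shift with shrinking radius; the simple model is known to overestimate the shift for very small particles, and none of these numbers are taken from the review discussed here:

```python
import numpy as np

# Illustrative Brus (effective-mass) estimate of how quantum-dot emission
# shifts with particle radius. Material parameters are approximate values
# for CdSe and serve only to show the trend, not to reproduce real spectra.

HBAR = 1.0546e-34      # J*s
M0   = 9.109e-31       # electron rest mass, kg
E0   = 1.602e-19       # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def brus_gap_eV(radius_nm, eg_bulk=1.74, me=0.13, mh=0.45, eps_r=10.6):
    """Approximate band gap (eV) of a spherical nanocrystal of given radius."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * np.pi**2) / (2 * r**2) * (1/(me*M0) + 1/(mh*M0))
    coulomb = 1.786 * E0**2 / (4 * np.pi * EPS0 * eps_r * r)
    return eg_bulk + (confinement - coulomb) / E0

for r in (1.5, 2.0, 3.0, 5.0):
    gap = brus_gap_eV(r)
    print(f"R = {r:3.1f} nm  ->  E ≈ {gap:4.2f} eV  (λ ≈ {1240/gap:4.0f} nm)")
```

Shrinking the assumed radius from 5 nm to 1.5 nm moves the estimated emission from the red towards the blue, which is the size-tunability exploited by Qdot-based probes.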
3.8. Fluorescence-Based Multiplex Protein Detection Using Optically Encoded Microbeads
The concomitant detection of multiple signals for multiple biological parameters has long been at the top of the wish list of bioscience and biomedical researchers, for the development of multiplex assays and high-throughput screening (HTS), as well as of physicians, for fast and easy diagnostics. Optical methods, including fluorescence- and plasmonic phenomena-based detection, have the potential to achieve this goal, provided that the individual signals are well separated and minimal bleed-through occurs between the various channels. Technologies that use reduced sample volumes, allow the detection of multiple signals, and detect these signals quickly are highly suitable for the aforementioned purposes, especially HTS. In recent years, the use of bar-coded micro-sized beads (microbeads) in bead-based suspension or liquid arrays in particular has gained much attention for the multiplex detection of biomolecules. With their large surface area, more capture biomolecules can be immobilized on the bead's surface than on conventional arrays; detection is fast and its sensitivity at least equal to that of established methods; target molecules can be collected by flow cytometry; the beads can be used in combination with microfluidic devices; and large-scale fabrication, easy customization and straightforward storage round off the advantages of this technology. In this special edition, Bong-Hyun Jun et al. review recent developments in analytical protein screening methods on microbead-based platforms, such as barcoded microbeads, and molecular beacon- and surface-enhanced Raman scattering-based techniques . The authors conclude that this technology has come a long way, but is still far from mature. Issues that remain to be addressed include the development of a larger number of optical codes, increased readout speed, safety, cost effectiveness, increased sensitivity, and the need for more ergonomic equipment for use in bioapplications and clinical settings.
3.9. Fluorescence Spectroscopic Properties of Silyl-Substituted Naphthalene Derivatives
Silicon (Si) is the second most abundant element in the Earth's crust and shares group 14 of the periodic table with carbon. This tetravalent metalloid can behave similarly to carbon and analogously forms complex molecules, e.g., silanes, silenes and organosilicon compounds, albeit that Si is less reactive than carbon. Organosilicon compounds are organic molecules that contain carbon-silicon bonds, with the organically bound silicon being tetravalent and tetrahedral; they are distinctly environmentally friendly and have extraordinary photochemical and luminescent properties, including photoinduced electron transfer reactions and intramolecular charge transfer complex formation in aromatic disilanes. Interestingly, substitution of organic fluorescent dyes, such as anthracenes, naphthacenes, pentacenes, and pyrenes, with silicon-bearing groups, particularly silyl and silylethynyl, but also with groups based on other members of the same periodic group, e.g., germyl and stannyl, enhances the fluorescence intensity. Hajime Maeda and co-workers studied the fundamental absorption and fluorescence properties of monosilyl-group substituted naphthalene derivatives: 1-silyl-, 1,4-disilyl-, 1-silylethynyl- and 1,4-disilylethynyl-naphthalenes .
Their research showed that 1-silyl- and 1,4-disilylnaphthalenes have absorption maxima at longer wavelengths, with larger ε values, than naphthalene itself. Furthermore, bathochromic shifts and incremental increases in ε were observed for electron-donating, electron-withdrawing and silylethynyl group substituted naphthalenes, and the fluorescence quantum efficiencies increase while the lifetimes decrease when the silyl substituents are on the naphthalene ring system.
3.10. Fluorescent Probes for Detecting the Phagocytic Phase of Apoptosis
Apoptosis, or programmed cell death, is a regulated and orderly form of elimination of cells that have gone awry, have been invaded by pathogens, or have been damaged by exogenous causes . It is distinctly different from necrosis in that no loss of plasma membrane integrity occurs. The demolition process starts with a series of perturbations of the cellular architecture that set the process of cell death in motion: condensation and fragmentation of the nucleus, globularization, membrane blebbing, detachment from the surrounding cells, and preparation for recognition and removal by phagocytes. Furthermore, unwanted immune responses are prevented. The apoptotic corpses are subsequently cooperatively removed by phagocytic cells, and this phase of apoptosis ensures efficient degradation of DNA, which in turn inhibits self-immunization, inflammation, and the release of viral or tumor DNA [72,73]. During the phagocytic phase of apoptosis, DNA is degraded by a single nuclease, DNase II. With current technology, optical microscopy-based assessment and detection of phagocytizing cells, and accurate discrimination of adherent versus internalized apoptotic cells, is challenging and labor-intensive. The development of fluorescent probes capable of detecting this phase of apoptosis is therefore highly desirable, to allow researchers to better understand the basic processes involved. A major step towards achieving this goal was recently made by Candace Minchew and Vladimir Didenko. These authors synthesized fluorescent probes that are the covalently bound enzyme-DNA intermediates produced in a topoisomerase reaction with specific "starting" oligonucleotides; composition: vaccinia topoisomerase I−hairpin-shaped oligonucleotide−probe (fluorescein isothiocyanate) . The probe selectively detects blunt-ended 5'OH DNA breaks, which are specific markers of DNase II cleavage activity. In sections and fixed cells, this methodology allows the imaging of the digestion processes that occur in the cellular organelles responsible for the actual execution of phagocytic degradation of apoptotic cell corpses. The authors applied the probes to visualize and study the phagocytic reaction in tissue sections of normal thymus and in several human lymphomas.
3.11. Fluorescent Hyaluronan Analogs for Hyaluronan Studies
Hyaluronan, or hyaluronic acid (HA), is an anionic, nonsulfated, linear, high molecular weight polyglycosaminoglycan, consisting of repeating units of the disaccharide D-glucuronic acid-β(1→3)-N-acetyl-D-glucosamine-β(1→4). In vivo, HA is found in a wide variety of tissues with varying molecular weights, ranging from 5,000 to 20,000,000 Da, e.g., 3−4 million Da in human synovial fluid and 3,140,000 Da in human umbilical cord . Hyaluronan is ubiquitously present in the extracellular matrix of all vertebrates and in the capsule of group A Streptococci.
About 50% of HA is found in the skin and 25% in the skeleton and its supporting structures, such as ligaments and joints, where it acts as a lubricant and is responsible for the compressive properties of articular cartilage. HA is synthesized on the inner face of the plasma membrane, rather than in the Golgi, and is directly extruded into the extracellular matrix. HA is involved in a myriad of normal and abnormal biological processes, including cell proliferation and migration, wound repair, the aforementioned functions in cartilage, and maintenance of the hydration and osmotic balance of tissues owing to its high water-binding capacity. It also plays a role in certain cancers, e.g., mesothelioma, Wilms' tumor, and prostate and breast cancer, and in bladder cancer HA was found to be associated with tumor angiogenesis and metastasis . Furthermore, HA is widely used in cosmetics, especially skin-care products, and in cosmetic surgery as a dermal filler. Studying the role of HA in both physiological and pathophysiological function is therefore a highly relevant and attractive topic. To enable imaging of the various processes and applications involving HA, suitable HA-based probes must be developed. For this purpose, Wei Wang from Shi Ke's group developed fluorescent HA analogs based on the near-infrared heptamethine cyanine dye IR-783 for cellular and small animal imaging applications . The researchers developed two different forms of the HA analogs: one for normal imaging purposes, and a modified version that acts as a biosensitive contrast agent, obtained by labeling HA with varying molar percentages of IR-783. At low labeling ratios, the uptake and transport of hyaluronan can be imaged directly, while at high labeling ratios the fluorescent signal is quenched and fluorescence emission only occurs after HA degradation within the cell. Preliminary investigations in hairless SKH mice not only show rapid distribution after tail vein injection and subsequent accumulation in various glandular systems and in upper abdominal and thoracic organs (Figure 6), but also demonstrate the feasibility of using these HA analogs in whole-animal imaging.
4. Concluding Remarks
Luminescent technology has undeniably been a bright light in human development and science. Especially the past few decades have seen major advances, with the discovery of fluorescent proteins, novel small animal imaging methods, super-resolution microscopy, and lasers and LEDs, to name but a few. Interestingly enough, there seems to be no limit to the innovation in luminescent technologies, with holographic imaging and display, white-light super-resolution microscopy with nano-lenses, and a myriad of OLED applications on the horizon. The next decades will certainly be extremely exciting for those of us working at the interface of nanoscience, chemistry, medicine, and biology. The papers presented in this special edition show the intensity of the research efforts in this field, and biomedical and biological science researchers are certainly going to benefit from these innovations. However, the results presented here also highlight some of the disadvantages that still remain. It is therefore essential for researchers to have a profound knowledge of the basic principles involved in photoluminescence. The myriad of high-quality reviews that appear regularly will certainly aid in achieving this goal. The future is bright!
As special edition editor, I would like to thank the editorial team, especially Jely He, Jessica Bai, Yuan Gao, Jerry Zhang, Xinya Huang, Elissa Ge, and Tracy Chen, for their continuous support. On behalf of myself and the editorial board, I would like to thank all contributing authors for their work and excellent manuscripts, the many reviewers for sacrificing their time for peer review and their invaluable efforts to ensure the highest scientific standards. My special thanks go to the publisher, MDPI (Basel, Switzerland), and the founding editor Shu-Kun Lin. In addition, I would gratefully like to acknowledge Phil Hart (Australia, http://philhart.com/content/bioluminescence-gippsland-lakes) for providing the astonishing images of water bioluminescence by Noctiluca scintillans and Ron Teunissen (Netherlands, http://www.fluorescent.nl/) for the illuminating pictures of Hardystonite and Willemite/Calcite. Finally, Molecules presents itself in a new format, with a refurbished web-site (http://www.mdpi.com/journal/molecules) and steadily increasing impact factor. I would like to take the opportunity to encourage authors and the wider scientific community to support Open Access Publishing (OAP) and MDPI journals such as Molecules in particular. The debate on OAP is still ongoing and it is unclear in which format OAP will be most effective, for the scientific community, public at large, and the publishers, but certainly OAP is and will be a true enrichment to all. - Eckert, R.; Reynolds, G.T. The subcellular origin of bioluminescence in Noctiluca miliaris. J. Gen. Physiol. 1967, 50, 1429–1458. [Google Scholar] - Chalfie, M. GFP: Lighting up life (Nobel Lecture). Angew. Chem. Int. Ed. Engl. 2009, 48, 5603–11. [Google Scholar] [CrossRef] - Chalfie, M.; Tu, Y.; Euskirchen, G.; Ward, W.W.; Prasher, D.C. Green fluorescent protein as a marker for gene expression. Science 1994, 263, 802–805. [Google Scholar] - Shimomura, O. Discovery of green fluorescent protein (GFP) (Nobel Lecture). Angew. Chem. Int. Ed. Engl. 2009, 48, 5590–5602. [Google Scholar] [CrossRef] - Shimomura, O.; Johnson, F.H.; Saiga, Y. Extraction, purification and properties of aequorin, a bioluminescent protein from the luminous hydromedusan, Aequorea. J. Cell. Comp. Physiol. 1962, 59, 223–239. [Google Scholar] [CrossRef] - Tsien, R.Y. The green fluorescent protein. Annu. Rev. Biochem. 1998, 67, 509–544. [Google Scholar] - Tsien, R.Y. Constructing and exploiting the fluorescent protein paintbox (Nobel Lecture). Angew. Chem. Int. Ed. Engl. 2009, 48, 5612–5626. [Google Scholar] [CrossRef] - Kubodera, T.; Koyama, Y.; Mori, K. Observations of wild hunting behaviour and bioluminescence of a large deep-sea, eight-armed squid, Taningia danae. Proc. Biol. Sci. 2007, 274, 1029–1034. [Google Scholar] [CrossRef] - Chapman, G.N.; Walton, A.J. Triboluminescence of glasses and quartz. J. Appl. Phys. 1983, 54, 5961–5965. [Google Scholar] [CrossRef] - Vettegren, V.I.; Bashkarev, A.Y.; Mamalimov, R.I.; Mamedov, R.K.; Scherbakov, I.P. Dynamics of luminescence bursts during the dry friction of quartz and PMMA. Tech. Phys. Lett. 2008, 34, 411–413. [Google Scholar] [CrossRef] - Tsuboi, Y.; Seto, T.; Kitamura, N. Laser-induced shock wave can spark triboluminescence of amorphous sugars. J. Phys. Chem. A 2008, 112, 6517–6521. [Google Scholar] [CrossRef] - Zink, J.; Hardy, G.E.; Sutton, J.E. Triboluminescence of sugars. J. Phys. Chem. 1976, 80, 248–249. [Google Scholar] [CrossRef] - Alexander, A.J. 
Deep ultraviolet and visible crystalloluminescence of sodium chloride. J. Chem. Phys. 2012, 136, 064512. [Google Scholar] [CrossRef] - Garten, V.A.; Head, R.B. Crystalloluminescence. Nature 1966, 209, 705. [Google Scholar] [CrossRef] - Edison, T.A. An electric lamp—Using a carbon filament or strip coiled and connected to platina contact wires. U.S. Patent 22389, 27 January 1880. [Google Scholar] - Hooke, R. Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses, 1st; Martyn, J., Allestry, J., Eds.; The Royal Society: London, UK, 1665; p. 246. [Google Scholar] - van Leeuwenhoek, A. The Select Works of Antony van Leeuwenhoek: Containing His Microscopical Discoveries in Many of the Works of Nature; Arno Press, A New York Times Co.: New York, NU, USA, 1977; p. 674. [Google Scholar] - Heimstädt, O. The fluorescence microscope (Das Fluoreszenzmikroskop). Z Wiss Mikrosk 1911, 28, 330–337. [Google Scholar] - Schawlow, A.L.; Townes, C.H. Infrared and optical masers. Phys. Rev. 1958, 112, 1940–1949. [Google Scholar] [CrossRef] - Solon, L.R.; Aronson, R.; Gould, G. Physiological implications of laser beams. Science 1961, 134, 1506–1508. [Google Scholar] - Pawley, J.B. Handbook of Biological Confocal Microscopy, 3rd ed.; Springer US: New York, NY, USA, 2006. [Google Scholar] - Minsky, M. Microscopy apparatus. U.S. Patent US3013467, 1961. [Google Scholar] - Göppert-Mayer, M. Über Elementarakte mit zwei Quantensprüngen (on elementary acts with two quantum jumps). Annalen der Physik 1931, 401, 273–294. [Google Scholar] [CrossRef] - Denk, W.; Strickler, J.H.; Webb, W.W. Two-photon laser scanning fluorescence microscopy. Science 1990, 248, 73–76. [Google Scholar] - Stepanenko, O.V.; Shcherbakova, D.M.; Kuznetsova, I.M.; Turoverov, K.K.; Verkhusha, V.V. Modern fluorescent proteins: From chromophore formation to novel intracellular applications. Biotechniques 2011, 51, 313-4, 316, 318, passim. [Google Scholar] - Drobizhev, M.; Makarov, N.S.; Tillo, S.E.; Hughes, T.E.; Rebane, A. Two-photon absorption properties of fluorescent proteins. Nat. Methods 2011, 8, 393–399. [Google Scholar] [CrossRef] - Goedhart, J.; van Weeren, L.; Hink, M.A.; Vischer, N.O.; Jalink, K.; Gadella, T.W., Jr. Bright cyan fluorescent protein variants identified by fluorescence lifetime screening. Nat. Methods 2010, 7, 137–139. [Google Scholar] [CrossRef] - Zimmer, M. GFP: From jellyfish to the Nobel prize and beyond. Chem. Soc. Rev. 2009, 38, 2823–2832. [Google Scholar] [CrossRef] - Abbe, E. Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung (Contributions to the theory of the microscope and microscopical obsercation). Archiv für Mikroskopische Anatomie 1873, 9, 413–418. [Google Scholar] [CrossRef] - Huang, B. Super-resolution optical microscopy: Multiple choices. Curr. Opin. Chem. Biol. 2010, 14, 10–14. [Google Scholar] [CrossRef] - Toomre, D.; Bewersdorf, J. A new wave of cellular imaging. Annu. Rev. Cell Dev. Biol. 2010, 26, 285–314. [Google Scholar] [CrossRef] - Alivisatos, A.P.; Gu, W.W.; Larabell, C. Quantum dots as cellular probes. Annu. Rev. Biomed. Eng. 2005, 7, 55–76. [Google Scholar] [CrossRef] - Barroso, M.M. Quantum dots in cell biology. J. Histochem. Cytochem. 2011, 59, 237–251. [Google Scholar] [CrossRef] - Drummen, G.P. Quantum dots-from synthesis to applications in biomedicine and life sciences. Int. J. Mol. Sci. 2010, 11, 154–163. [Google Scholar] [CrossRef] - Johnson, I.; Spence, M.T.Z. 
Molecular Probes Handbook, A Guide to Fluorescent Probes and Labeling Technologies, 11th ed.; Molecular Probes: Eugene, OR, USA, 2010. [Google Scholar] - Braslavsky, S.E. Glossary of terms used in photochemistry, 3rd edition (IUPAC Recommendations 2006). Pure Appl. Chem. 2007, 79, 293–465. [Google Scholar] [CrossRef] - Lakowicz, J.R. Principles of Fluorescence Spectroscopy, 3rd ed.; Springer Science: New York, NY, USA, 2006. [Google Scholar] - Probst, J.; Dembski, S.; Milde, M.; Rupp, S. Luminescent nanoparticles and their use for in vitro and in vivo diagnostics. Expert Rev. Mol. Diagn. 2012, 12, 49–64. [Google Scholar] [CrossRef] - Planck, M. Ueber das Gesetz der Energieverteilung im Normalspectrum (On the Law of energy distribution in the normal spectrum). Annalen der Physik 1901, 309, 553–563. [Google Scholar] [CrossRef] - Stokes, G.G. On the change of refrangibility of light. Phil. Trans. R. Soc. Lond. 1852, 142, 463–562. [Google Scholar] [CrossRef] - Ishikawa-Ankerhold, H.C.; Ankerhold, R.; Drummen, G.P. Advanced Fluorescence Microscopy Techniques-FRAP, FLIP, FLAP, FRET and FLIM. Molecules 2012, 17, 4047–132. [Google Scholar] [CrossRef] - Förster, T. Energiewanderung und Fluoreszenz (Energy transfer and fluorescence). Naturwissenschaften 1946, 6, 166–175. [Google Scholar] [CrossRef] - Förster, T. Zwischenmolekulare Energiewanderung und Fluoreszenz (Inter-molecular energy transfer and fluorescence). Annalen der Physik 1948, 2, 55–75. [Google Scholar] [CrossRef] - Dyba, M.; Jakobs, S.; Hell, S.W. Immunofluorescence stimulated emission depletion microscopy. Nat. Biotechnol. 2003, 21, 1303–1304. [Google Scholar] [CrossRef] - Gustafsson, M.G. Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution. Proc. Natl. Acad. Sci. USA 2005, 102, 13081–13086. [Google Scholar] [CrossRef] - Betzig, E.; Patterson, G.H.; Sougrat, R.; Lindwasser, O.W.; Olenych, S.; Bonifacino, J.S.; Davidson, M.W.; Lippincott-Schwartz, J.; Hess, H.F. Imaging intracellular fluorescent proteins at nanometer resolution. Science 2006, 313, 1642–1645. [Google Scholar] - Huang, B.; Wang, W.; Bates, M.; Zhuang, X. Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy. Science 2008, 319, 810–813. [Google Scholar] [CrossRef] - Rust, M.J.; Bates, M.; Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 2006, 3, 793–795. [Google Scholar] - Endesfelder, U.; Malkusch, S.; Flottmann, B.; Mondry, J.; Liguzinski, P.; Verveer, P.J.; Heilemann, M. Chemically induced photoswitching of fluorescent probes--a general concept for super-resolution microscopy. Molecules 2011, 16, 3106–3118. [Google Scholar] [CrossRef] - Dexter, D.L. A Theory of Sensitized Luminescence in Solids. J. Chem. Phys. 1953, 21, 836–850. [Google Scholar] [CrossRef] - Grabowski, Z.R.; Rotkiewicz, K.; Rettig, W. Structural changes accompanying intramolecular electron transfer: Focus on twisted intramolecular charge-transfer states and structures. Chem. Rev. 2003, 103, 3899–4032. [Google Scholar] [CrossRef] - Shigeta, M.; Morita, M.; Konishi, G. Selective Formation of Twisted Intramolecular Charge Transfer and Excimer Emissions on 2,7-bis(4-Diethylaminophenyl)-fluorenone by Choice of Solvent. Molecules 2012, 17, 4452–4459. [Google Scholar] [CrossRef] - Grabowski, Z.R. Electron-transfer and the structural changes in the excited state. Pure Appl. Chem. 1992, 64, 1249–1255. 
[Google Scholar] [CrossRef] - Liu, E.H.; Qi, L.W.; Li, P. Structural relationship and binding mechanisms of five flavonoids with bovine serum albumin. Molecules 2010, 15, 9092–9103. [Google Scholar] [CrossRef] - Kasurinen, J.; Somerharju, P. Metabolism of pyrenyl fatty acids in baby hamster kidney fibroblasts. Effect of the acyl chain length. J. Biol. Chem. 1992, 267, 6563–6569. [Google Scholar] - Naylor, B.L.; Picardo, M.; Homan, R.; Pownall, H.J. Effects of fluorophore structure and hydrophobicity on the uptake and metabolism of fluorescent lipid analogs. Chem. Phys. Lipids 1991, 58, 111–119. [Google Scholar] [CrossRef] - Bains, G.; Patel, A.B.; Narayanaswami, V. Pyrene: A probe to study protein conformation and conformational changes. Molecules 2011, 16, 7909–7935. [Google Scholar] [CrossRef] - Zheng, Y.; Tan, C.; Drummen, G.P.; Wang, Q. A luminescent lanthanide complex-based anion sensor with electron-donating methoxy groups for monitoring multiple anions in environmental and biological processes. Spectrochim. Acta A Mol. Biomol. Spectrosc. 2012, 96, 387–394. [Google Scholar] - Nishimura, Y.; Takemura, T.; Arai, S. Li selective podand-type fluoroionophore based on a diphenyl sulfoxide derivative bearing two pyrene groups. Molecules 2011, 16, 6844–6857. [Google Scholar] [CrossRef] - Loura, L.M.; Ramalho, J.P. Recent developments in molecular dynamics simulations of fluorescent membrane probes. Molecules 2011, 16, 5437–5452. [Google Scholar] [CrossRef] - Drummen, G.P.; Op den Kamp, J.A.; Post, J.A. Validation of the peroxidative indicators, cis-parinaric acid and parinaroyl-phospholipids, in a model system and cultured cardiac myocytes. Biochim. Biophys. Acta 1999, 1436, 370–382. [Google Scholar] [CrossRef] - Steenbergen, R.H.; Drummen, G.P.; Op den Kamp, J.A.; Post, J.A. The use of cis-parinaric acid to measure lipid peroxidation in cardiomyocytes during ischemia and reperfusion. Biochim. Biophys. Acta 1997, 1330, 127–137. [Google Scholar] [CrossRef] - Kleusch, C.; Hersch, N.; Hoffmann, B.; Merkel, R.; Csiszar, A. Fluorescent lipids: Functional parts of fusogenic liposomes and tools for cell membrane labeling and visualization. Molecules 2012, 17, 1055–1073. [Google Scholar] [CrossRef] - Mishra, A.; Ma, C.Q.; Bauerle, P. Functional oligothiophenes: Molecular design for multidimensional nanoarchitectures and their applications. Chem. Rev. 2009, 109, 1141–1276. [Google Scholar] - Capobianco, M.L.; Barbarella, G.; Manetto, A. Oligothiophenes as fluorescent markers for biological applications. Molecules 2012, 17, 910–933. [Google Scholar] [CrossRef] - Löbbert, G. Phthalocyanines. In Ullmann's Encyclopedia of Industrial Chemistry; Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim, Germany, 2000. [Google Scholar] - Sekkat, N.; van den Bergh, H.; Nyokong, T.; Lange, N. Like a bolt from the blue: Phthalocyanines in biomedical optics. Molecules 2012, 17, 98–144. [Google Scholar] - Hood, E. Nanotechnology: Looking as we leap. Environ. Health Perspect. 2004, 112, A740–A749. [Google Scholar] [CrossRef] - Merian, J.; Gravier, J.; Navarro, F.; Texier, I. Fluorescent nanoprobes dedicated to in vivo imaging: From preclinical validations to clinical translation. Molecules 2012, 17, 5564–5591. [Google Scholar] [CrossRef] - Jun, B.H.; Kang, H.; Lee, Y.S.; Jeong, D.H. Fluorescence-based multiplex protein detection using optically encoded microbeads. Molecules 2012, 17, 2474–2490. [Google Scholar] [CrossRef] - Maeda, H.; Maeda, T.; Mizuno, K. 
Absorption and fluorescence spectroscopic properties of 1- and 1,4-silyl-substituted naphthalene derivatives. Molecules 2012, 17, 5108–5125. [Google Scholar] [CrossRef] - Taylor, R.C.; Cullen, S.P.; Martin, S.J. Apoptosis: Controlled demolition at the cellular level. Nat. Rev. Mol. Cell Biol. 2008, 9, 231–241. [Google Scholar] - Samejima, K.; Earnshaw, W.C. Trashing the genome: The role of nucleases during apoptosis. Nat. Rev. Mol. Cell Biol. 2005, 6, 677–688. [Google Scholar] [CrossRef] - Minchew, C.L.; Didenko, V.V. Fluorescent probes detecting the phagocytic phase of apoptosis: Enzyme-substrate complexes of topoisomerase and DNA. Molecules 2011, 16, 4599–4614. [Google Scholar] [CrossRef] - Saari, H.; Konttinen, Y.T.; Friman, C.; Sorsa, T. Differential effects of reactive oxygen species on native synovial fluid and purified human umbilical cord hyaluronate. Inflammation 1993, 17, 403–415. [Google Scholar] [CrossRef] - Wang, W.; Cameron, A.G.; Ke, S. Developing fluorescent hyaluronan analogs for hyaluronan studies. Molecules 2012, 17, 1520–1534. [Google Scholar] [CrossRef] - Toole, B.P. Hyaluronan: From extracellular glue to pericellular cue. Nat. Rev. Cancer 2004, 4, 528–539. [Google Scholar] [CrossRef] © 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
28
- How can light that has been captured in a solar cell be examined in experiments? Jülich scientists have succeeded in looking directly at light propagation within a solar cell by using a trick. The photovoltaics ... (Read More)
- An odd iridescent material that has puzzled physicists for decades turns out to be an exotic state of matter that could open a new path to quantum computers and other next-generation electronics. Physicists at the ... (Read More)
- Polylactic acid is a degradable plastic used mostly for packaging. To meet the rising demand, ETH researchers have developed an eco-friendly process to make large amounts of lactic acid from glycerol, a waste by- ... (Read More)
- Plastic is well known for sticking around in the environment for years without breaking down, contributing significantly to litter and landfills. But scientists have now discovered that bacteria from the guts of a worm known ... (Read More)
- Citizen science boosts environmental awareness and advocacy more than previously thought and can lead to broader public support for conservation efforts, according to a new study by researchers at Duke University's Nicholas School of the ... (Read More)
- The ability to 'thread' a molecular ligand through a metal-organic framework (MOF) to alter the pore size of the material, and yet allow the MOF to retain its crystallinity and principal structural features, has ... (Read More)
- A*STAR researchers have devised a way to destabilize gold nanoclusters so that they form tiny atomic nuclei that then grow together into perfectly proportioned 12-sided dodecahedron crystals. These unique polyhedra have energy-rich surfaces ... (Read More)
- In quantum optics, generating entangled and spatially separated photon pairs, e.g., for quantum cryptography, is already a reality. So far, however, it has not been possible to demonstrate an analogous generation and spatial separation ... (Read More)
- A new report from the American Institute of Physics (AIP) Statistical Research Center has found that the number of Hispanic students receiving bachelor's degrees in the physical sciences and engineering has increased over the last ... (Read More)
- Scientists at Brunel University London have confirmed that treating molten metal with ultrasound is a cleaner, greener and more efficient route to produce high-quality castings. Molten aluminium alloys at 700 °C naturally contain a ... (Read More)
- Taking inspiration from nature, researchers have created a versatile model to predict how stalagmite-like structures form in nuclear processing plants, as well as how limescale builds up in kettles. It's a wonderful example ... (Read More)
- A new technology that reveals cellular gene transcription in greater detail has been developed by Dr Daniel Kaufmann of the University of Montreal Hospital Research Centre (CRCHUM) and the research team he directed. This new ... (Read More)
- Rice University scientists have discovered an environmentally friendly carbon capture method that could be equally adept at drawing carbon dioxide emissions from industrial flue gases and natural gas wells. The Rice lab of chemist Andrew ... (Read More)
- A new, innovative dashboard from the National Institute of Standards and Technology (NIST) won't help you drive your car, but it will help enable reproducible research in biology. In a recent paper in the journal ... (Read More)
- An international team including scientists from DESY has caught a light-sensitive biomolecule at work with an X-ray laser. The study demonstrates that X-ray lasers can capture the fast dynamics of biomolecules in ... (Read More)
- Human biology is a massive collection of chemical reactions, from the intricate signaling network that powers our brain activity to the body's immune response to viruses and the way our eyes adjust to sunlight. All ... (Read More)
- Did Mars ever have life? Does it still? A meteorite from Mars has reignited the old debate. An international team that includes scientists from EPFL has published a paper in the scientific journal Meteoritics and ... (Read More)
- An efficient method to harvest low-grade waste heat as electricity may be possible using reversible ammonia batteries, according to Penn State engineers. The use of waste heat for power production would allow additional electricity ... (Read More)
Tell us what you think of Chemistry 2011 -- we welcome both positive and negative comments. Have any problems using the site? Questions?
Chemistry2011 is an informational resource for students, educators and the self-taught in the field of chemistry. We offer resources such as course materials, chemistry department listings, activities, events, projects and more, along with current news releases. The history of the domain extends back to 2008, when it was selected as the host domain for the International Year of Chemistry 2011 (IYC 2011), designated by UNESCO and an initiative of IUPAC that celebrated the achievements of chemistry. You can learn more about IYC2011 by clicking here. With IYC 2011 now over, the domain is currently under redevelopment by The Equipment Leasing Company Ltd.
Are you interested in listing an event or sharing an activity or idea? Perhaps you are coordinating an event and are in need of additional resources? Within our site you will find a variety of activities and projects your peers have previously submitted or which have been freely shared through Creative Commons licenses. Here are some highlights: Featured Idea 1, Featured Idea 2. Ready to get involved? The first step is to sign up by following the link: Join Here. Also don't forget to fill out your profile, including any professional designations.
28
Nanobone Cells (image)
Ohio State University
Caption: Cells show signs of healthy growth in this transmission electron microscope image, taken 15 hours after they were placed on a titanium surface coated with a carpet of tiny nanowires. In the inset (upper left), filaments can be seen reaching out from the cells to the surface, which indicates a strong connection. Ohio State University engineers are developing the coating, which could someday help broken bones and joint replacements heal faster.
Credit: Image courtesy of Sheikh Akbar, Ohio State University.
Usage Restrictions: None
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
28
By completing the Bound States Calculation Lab, users will be able to: a) understand the concept of bound states, b) interpret the meaning of the eigenvalues and the eigenvectors, and c) recognize the form of the eigenvalues and eigenvectors for rectangular, parabolic and triangular confinement. The specific objectives of the Bound States Calculation Lab are summarized in the accompanying figure (models.jpg). Users who are new to the concept of bound states and to the solution of the Schrödinger equation for bound states should consult the following resource:
1. D. K. Ferry, Quantum Mechanics: An Introduction for Device Physicists and Electrical Engineers, Taylor & Francis.
* Bound States Calculation Description (tutorial)
* Bound States Calculation Lab - Fortran Code (source code dissemination)
Exercises and Homework Assignments
Solutions to Exercises (solutions are provided only to instructors)
This test will assess the user's conceptual understanding of the physical, mathematical and computational knowledge related to quantum bound states in the different confining potentials that occur in real device structures. Users are challenged to integrate what they have learned about Quantum Bound States.
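As a minimal illustration of what such a tool computes (this is an independent sketch, not the lab's actual Fortran implementation), the lowest bound states of a one-dimensional finite square well can be obtained by diagonalizing a finite-difference Hamiltonian:

```python
import numpy as np

# Minimal finite-difference sketch: bound states of a 1D finite square well.
# Reduced units: hbar = 1, effective mass m = 1. Illustrative only.

n, L = 1000, 40.0                      # grid points, total domain length
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

well_depth, well_width = 5.0, 2.0
V = np.where(np.abs(x) < well_width / 2, -well_depth, 0.0)

# H = -(1/2) d^2/dx^2 + V(x), discretized with a 3-point Laplacian
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)   # eigenvalues and eigenvectors
bound = energies[energies < 0]         # bound states lie below the well rim
print("bound-state energies:", np.round(bound, 4))
```

The eigenvalues are the bound-state energies and the eigenvector columns are the corresponding wavefunctions; swapping the rectangular well for a parabolic or triangular potential changes only the definition of V.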
28
Step-Index Fiber Simulation Bjorn Sjodin | April 23, 2013 Optical fibers are used to transmit information in the form of light through an optical waveguide made of glass fibers. The light is sent in a series of pulses that can be translated as binary code, allowing the transfer of information through the fiber. Because such pulses can travel with less attenuation and are immune to electromagnetic disturbances, fibers are used instead of traditional metallic wires, thus allowing data transmission over longer distances and at higher bandwidths. What is a Step-Index Fiber? A step-index fiber is an optical fiber that exhibits a step-index profile; the refractive index remains constant throughout the core of the fiber, while there is an abrupt decrease in the refractive index at the interface between the core and the outer covering, or “cladding”. The step-index profile can be used for both single-mode and multi-mode fibers. Single-Mode vs. Multi-Mode Waveguides Single-mode fibers have smaller cores (about 10 microns in diameter) than multi-mode fibers, and the difference between them lies mainly in the modal dispersion they exhibit and the number of different modes that can be transmitted. Single-mode fibers have a narrower modal dispersion, which means that they transmit each light pulse over longer distances with better fidelity than multi-mode fibers. This is why single-mode fibers, for example, exhibit a higher bandwidth than multi-mode fibers. Analyzing a Single-Mode Step-Index Fiber Below you can see a simulation of a step-index fiber where the inner core is made of pure silica glass (SiO2) and has a refractive index of 1.445. The cladding has a refractive index of 1.4378. For the simulation results shown below, a mode analysis is made on the xy-plane of the fiber, where the wave propagates in the z-direction. A lot of information can be deduced from this type of eigenvalue, or eigenmode, analysis of just a cross-section of a fiber. Due to the symmetry of the fiber cross-section, the lowest eigenmode is degenerate and light sent into the fiber will propagate at the same speed for both of these modes of propagation. What happens if we squeeze the fiber hard or if there were manufacturing errors? It may then happen that the glass becomes birefringent, with different refractive indices in different directions. The lowest eigenmode will then split and not be degenerate anymore: light will propagate at a different speed for each mode, resulting in dispersion of the signal. This is just a very basic example; real-world analysis scenarios can of course be much more complicated. Figure captions: the surface plot and contour lines visualize the z-component of the electric field and the magnetic field, respectively; an alternative visualization of the same plot applies a height expression based on the electric field value to the surface plot. Revolutionizing Communication with Fiber Optics Although the concept behind fiber optic communications has been around since the 1800s, it wasn't until recently that they were implemented in the modern world. Light is transmitted through the fiber using a principle called total internal reflection, which allows the light to be propagated down the waveguide with (theoretically) zero loss to the outside environment. However, since we don't live in a theoretical world, information losses do occur. Prior to the 1970s, optical fibers were prone to large transmission losses, making them purely an academic endeavor. 
However, in 1970, researchers were able to show that it was possible to manufacture low-loss optical fibers. These new waveguides demonstrated losses as low as 20 decibels per kilometer (dB/km), instead of the 2,000 dB/km that were shown in previous experiments. Thanks to years of intensive development, today's fibers have losses that are near the theoretical limit for a given combination of materials used and geometrical characteristics. Between 1990 and 2000, the commercial optics market exploded, with cables implemented worldwide in just a few years. As Thomas Allen wrote in his National Geographic article "The Future is Calling" in 2001, "It took a hundred years to connect a billion people by wire. It has taken only ten years to connect the next billion people." Fiber optics revolutionized communication in the 1990s, and today improvements in transmission efficiency and cost of production continue to bring faster and more efficient communication to the developed world. Other Fiber Optics Applications While the most widespread use of fiber optic cables is in communication technologies, there are also many other applications that have been revolutionized through the use of optical waveguides. For example, in the medical field, physicians use optical fibers to look inside a patient's body during surgery or exploration. Fiber optics allow physicians to conduct minimally invasive surgeries using tiny incisions and endoscopes to provide light. They are also used in scientific research and manufacturing to provide light to hard-to-reach or hazardous areas. Fiber optic cables can also be used as sensors within machines or vacuum chambers, providing information about pressure, temperature, or voltage changes. What other types of applications do you think optical waveguides could revolutionize? Analysis of Optical Components Optical fibers are not the only optical component that can be analyzed. So-called photonic devices in future optoelectronic circuits pose challenges for simulation software, typically because of their elongated shapes that imply plenty of electromagnetic wave oscillations in the direction of propagation. Also, 2D simulations won't do — you need a full 3D simulation. Each oscillation in the direction of propagation needs to be densely sampled regardless of the numerical method used in order to achieve the necessary accuracy. A soon-to-come blog post will describe some new and exciting technologies that we at COMSOL are working on to make this easier. The picture below of a directional splitter is a great example of such a challenging photonics component: - Model Download: Step-Index Fiber - Model Download: Stress-Optical Effects — with Generalized Plane Strain and in a Photonic Waveguide - User Story: COMSOL simulates processors for fiber optics communication
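A small back-of-envelope check complements this kind of mode analysis. The sketch below is not part of the COMSOL model; the core radius and operating wavelength are assumptions typical of telecom fiber, while the refractive indices are the ones quoted above. It computes the numerical aperture and the normalized frequency (the V-number), which indicates whether a step-index fiber is single-mode:

```python
import math

# Hedged single-mode check for a step-index fiber (illustrative, not the model).
n_core = 1.445       # core refractive index (from the article)
n_clad = 1.4378      # cladding refractive index (from the article)
a      = 4.1e-6      # core radius in metres (assumed, typical telecom fiber)
lam    = 1.55e-6     # operating wavelength in metres (assumed, C-band)

NA = math.sqrt(n_core**2 - n_clad**2)   # numerical aperture
V  = 2 * math.pi * a / lam * NA         # normalized frequency (V-number)

print(f"NA = {NA:.3f}, V = {V:.2f}")
# A step-index fiber supports only the fundamental mode when V < ~2.405,
# the first zero of the Bessel function J0.
print("single-mode" if V < 2.405 else "multi-mode")
```

Larger cores or shorter wavelengths push V up and admit additional modes, which is the basic reason single-mode fibers need such small cores.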
28
Researchers at North Carolina State University have created a new compound that can be integrated into silicon chips and is a dilute magnetic semiconductor – meaning that it could be used to make "spintronic" devices, which rely on magnetic force to operate, rather than electrical currents. The researchers synthesized the new compound, strontium tin oxide (Sr3SnO), as an epitaxial thin film on a silicon chip. Epitaxial means the material is a single crystal. Because Sr3SnO is a dilute magnetic semiconductor, it could be used to create transistors that operate at room temperature based on magnetic fields, rather than electrical current. "We're talking about cool transistors for use in spintronics," says Dr. Jay Narayan, John C. Fan Distinguished Professor of Materials Science and Engineering at NC State and senior author of a paper describing the work. "Spintronics" refers to technologies used in solid-state devices that take advantage of the inherent "spin" in electrons and their related magnetic momentum. "There are other materials that are dilute magnetic semiconductors, but researchers have struggled to integrate those materials on a silicon substrate, which is essential for their use in multifunctional, smart devices," Narayan says. "We were able to synthesize this material as a single crystal on a silicon chip." "This moves us closer to developing spin-based devices, or spintronics," says Dr. Justin Schwartz, co-author of the paper, Kobe Steel Distinguished Professor and Department Head of the Materials Science and Engineering Department at NC State. "And learning that this material has magnetic semiconductor properties was a happy surprise." The researchers had set out to create a material that would be a topological insulator. In topological insulators the bulk of the material serves as an electrical insulator, but the surface can act as a highly conductive material – and these properties are not easily affected or destroyed by defects in the material. In effect, that means that a topological insulator material can be a conductor and its own insulator at the same time. Two materials are known to be topological insulators – bismuth telluride and bismuth selenide. But theorists predicted that other materials may also have topological insulator properties. Sr3SnO is one of those theoretical materials, which is why the researchers synthesized it. However, while early tests are promising, the researchers are still testing the Sr3SnO to confirm whether it has all the characteristics of a topological insulator. Explore further: Soft, energy-efficient robotic wings More information: The paper, "Epitaxial integration of dilute magnetic semiconductor Sr3SnO with Si (001)," was published online Sept. 9 in Applied Physics Letters.
28
(Phys.org) —There is, so to speak, uncertainty about uncertainty – that is, over the interpretation of how Heisenberg's uncertainty principle describes the extent of disturbance to one observable when measuring another. More specifically, the confusion is between the fact that, as Heisenberg first intuited, the measurement of one observable on a quantum state necessarily disturbs another incompatible observable, and the fact that on the other hand the indeterminacy of the outcomes when either one or the other observable is measured is bounded. Recently, Dr. Cyril Branciard at The University of Queensland precisely quantified the former by showing how it is possible to approximate the joint measurement of two observables, albeit with the introduction of errors with respect to the ideal measurement of each. Moreover, the scientist characterized the disturbance of an observable induced by the approximate measurement of another one, and derived a stronger error-disturbance relation for this scenario. Dr. Branciard describes the research and challenges he encountered. "Quantum theory tells us that certain measurements are incompatible and cannot be performed jointly," Branciard tells Phys.org. For example, he illustrates, it is impossible to simultaneously measure the position and speed of a quantum particle, the spin of a particle in different directions, or the polarization of a photon in different directions. "Although such joint measurements are forbidden," Branciard continues, "one can still try to approximate them. For instance, one can approximate the joint measurement of the spin of a particle in two different directions by actually measuring the spin in a direction in between. At the price of accepting some errors; this yields partial information on the spin in both directions – and the larger the precision is on one direction, the larger the error on the other must be." While it's challenging to picture what it means to measure a property "in between position and speed," he adds, it's possible to measure something that will give partial information on both the position and speed – but again, the more precise the position is measured, the less precise the speed, and vice versa. There is therefore a tradeoff between precision achievable for each incompatible observable, or equivalently on the errors made in their approximations. What exactly is this tradeoff? How well can one approximate the joint measurement? What fundamental limits does quantum theory precisely impose? This tradeoff – between the error on one observable versus the error on the other – can be characterized by so-called error-tradeoff relations, which show that certain values of errors for each observable are forbidden. "Certain error-tradeoff relations were known already, and set bounds on the values allowed," Branciard explains. "However, it turns out that in general those bounds could not be reached, since quantum theory actually restricts the possible error values more than what the previous relations were imposing." In his paper, Branciard derives new error-tradeoff relations which are tight, in the sense that the bounds they impose can be reached when one chooses a "good enough" approximation strategy. He notes that they thus characterize the optimal tradeoff one can have between the errors on the two observables. 
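To make the error bookkeeping concrete, here is a small numerical toy, my own illustration under assumed choices of state, observables and measurement direction rather than code from Branciard's paper. It approximates a joint measurement of two incompatible spin components by sharply measuring an intermediate direction, computes Ozawa-style root-mean-square errors for each target observable, and checks that a naive Heisenberg-type product bound fails while Ozawa's universally valid relation (in the form usually quoted) holds:

```python
import numpy as np

# Toy illustration (assumed example): approximate a joint measurement of
# A = sigma_x and B = sigma_y on the spin-up state by sharply measuring
# M = cos(t)*sigma_x + sin(t)*sigma_y along an intermediate direction.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)           # spin-up eigenstate of sigma_z

def expval(op):
    return np.real(psi.conj() @ op @ psi)

t = np.pi / 4                                   # intermediate direction (assumed)
M = np.cos(t) * sx + np.sin(t) * sy

# Ozawa-style rms errors: eps(A)^2 = <(M - A)^2>, eps(B)^2 = <(M - B)^2>.
epsA = np.sqrt(expval((M - sx) @ (M - sx)))
epsB = np.sqrt(expval((M - sy) @ (M - sy)))

# Standard deviations of A and B in the state, and C_AB = |<[A,B]>| / 2.
dA = np.sqrt(expval(sx @ sx) - expval(sx) ** 2)
dB = np.sqrt(expval(sy @ sy) - expval(sy) ** 2)
C = abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi) / 2

print(f"eps_A = {epsA:.3f}, eps_B = {epsB:.3f}, C_AB = {C:.3f}")
print("naive product eps_A*eps_B >= C_AB ?", epsA * epsB >= C)            # fails here
print("Ozawa relation holds ?", epsA * epsB + epsA * dB + dA * epsB >= C) # holds
```

At this angle the product of the two errors drops below the commutator bound, which is exactly the sense in which the naive relation is "in general not valid", while the weaker Ozawa combination stays above it.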
Branciard points out that the fact that the joint measurement of incompatible observables is impossible was first realized by Heisenberg in 1927, when, in his seminal paper, he explained that the measurement of one observable necessarily disturbs the other, and suggested an error-disturbance relation to quantify that. "General uncertainty relations were soon to be derived rigorously," Branciard continues. More specifically, the uncertainty relation known as the uncertainty principle or Heisenberg principle is a mathematical inequality asserting that there is a fundamental limit to the precision with which certain pairs of physical properties of a particle known as complementary variables, such as a particle's position and momentum, can be known simultaneously. In the case of position and momentum, the more precisely the position of a particle is determined, the less precisely its momentum can be known, and vice versa. "However," Branciard notes, "these 'standard' uncertainty relations quantify a different aspect of Heisenberg's uncertainty principle: instead of referring to the joint measurement of two observables on the same physical system – or to the measurement of one observable that perturbs the subsequent measurement of the other observable on the same system, as initially considered by Heisenberg – standard uncertainty relations bound the statistical indeterminacy of the measurement outcomes when either one or the other observable is measured on independent, identically prepared systems." Branciard acknowledges that there has been, and still is, confusion between those two versions of the uncertainty principle – that is, the joint measurement aspect and the statistical indeterminacy for exclusive measurements – and many physicists misunderstood the standard uncertainty relations as implying limits on the joint measurability of incompatible observables. "In fact," he points out, "it was widely believed that the standard uncertainty relation was also valid for approximate joint measurements, if one simply replaces the uncertainties with the errors for the position and momentum. However, this relation is in fact in general not valid." Surprisingly little work has been done on the joint measurement aspect of the uncertainty principle, and it has been quantified only in the last decade, when Ozawa [1] derived the first universally valid trade-off relations between errors and disturbance – that is, relations valid for all approximation strategies – and error-tradeoff relations for joint measurements. "However," says Branciard, "these relations were not tight. My paper presents new, stronger relations that are. In order to quantify the uncertainty principle for approximate joint measurements and derive error-tradeoff relations," he adds, "one first needs to agree on a framework and on definitions for the errors in the approximation. Ozawa developed such a framework for that, on which I based my analysis." A key aspect in Branciard's research is that quantum theory describes the states of a quantum system, their evolution and measurements in geometric terms – that is, physical states are vectors in a high-dimensional, complex Hilbert space, and measurements are represented by projections onto certain orthogonal bases of this high-dimensional space. "I made the most of this geometric picture to derive my new relations," Branciard explains. 
"Namely, I represented ideal and approximate measurements by vectors in a similar (but real) space, and translated the errors in the approximations into distances between the vectors. The incompatibility of the two observables to be approximated gave constraints on the possible configuration of those vectors in terms of the angles between the vectors." By then looking for general constraints on real vectors in a large-dimensional space, and on how close they can be from one another when some of their angles are fixed, Branciard was able to derive his relation between the errors in the approximate joint measurement. Branciard again notes that he used the framework developed mainly by Ozawa, who proposed to quantify the errors in the approximations by the statistical deviations between the approximations and their ideal measurements. In this framework, any measurement can be used to approximate any other measurement, in that the statistical deviation defines the error. However, the advantage of Branciard's new relation over previously derived ones is that it is, as he described it above, tight. "It does not only tell that certain values are forbidden," he points out, "but also shows that the bounds they impose can be reached. In fact," he illustrates, "I could show how to saturate my new relation for any pair of observables A and B and for any quantum state, and reach all optimal error values eA and eB, whether one wants a small eA at the price of having to increase eB, or vice versa." Moreover, he continues, the fact that it is tight is relevant experimentally, if one aims at testing these kinds of relations. "Showing that a given relation is satisfied is trivial if the relation is universally valid, since any measurement should satisfy it. What is less trivial is to show experimentally that one can indeed reach the bound of a tight relation. Experimental techniques now allow one to perform measurements down to the limits imposed by quantum theory, which makes the study of error-tradeoff relations quite timely. Also," he adds, "the tightness of error-tradeoff relations may be crucial if one considers applications such as the security of quantum communications: If one uses such relations to study how quantum theory restricts the possible actions of an eavesdropper, it will not be enough to say what cannot be done using simply a valid relation, but also what can be done when quantified by a tight relation." In Branciard's framework, the error-disturbance scenario initially considered by Heisenberg can be seen as a particular case of the joint measurement scenario, in that an approximate measurement of the first observable and a subsequent measurement of the then-disturbed incompatible second observable, taken together, constitute an approximate measurement of both observables. More specifically, the second measurement is only approximated because it is performed on the system after it has been disturbed by the first measurement. "Hence, in my framework," Branciard summarizes, "any constraint on approximation errors in joint measurements also applies to the error-disturbance scenario, in which the error on the second observable is interpreted as its disturbance and error-tradeoff relations simply imply error-disturbance relations. In fact," he adds, "while the error-disturbance case is a particular case of the more general joint measurement scenario, it's actually more constrained. 
This is because in that scenario the approximation of the second observable is done via the actual measurement of precisely that observable after the system has been disturbed by the approximate measurement of the first observable." This restricts the possible strategies for approximating a joint measurement, and as a consequence stronger constraints can generally be derived for errors versus disturbances rather than for error tradeoffs. Branciard gives a specific example. "Suppose the second observable can produce two possible measurement results – for example, +1 or -1 – that could correspond to measuring spin or polarization in a given direction. In the error-disturbance scenario, the approximation of the 2nd observable – that is, the actual measurement of that observable on the disturbed system – is restricted to produce either the result +1 or the result -1. However, in a more general scenario of approximate joint measurements, it may give lower errors in my framework to approximate the measurement by outputting other measurement results, say 1/2 or -3. For these reasons, one can in general actually derive error-disturbance relations that are stronger than error-tradeoff relations, as shown in my paper." The uncertainty principle is one of the main tenets of quantum theory, and is a crucial feature for applications in quantum information science, such as quantum computing, quantum communications, quantum cryptography, and quantum key distribution. "Standard uncertainty relations in terms of statistical indeterminacy for exclusive measurements are already used to prove the security of quantum key distribution," Branciard points out. "In a similar spirit, it may also be possible to use the joint measurement version of the uncertainty principle to analyze the possibility for quantum information applications. This would, however, probably require the expression of error-tradeoff relations in terms of information, by quantifying the limited information gained on each observable, rather than talking about errors." Looking ahead, Branciard describes possible directions for future research. "As mentioned, in order to make the most of the joint measurement version of the uncertainty principle and be able to use it to prove, for instance, the security of quantum information applications, it would be useful to express it in terms of information-theoretic – that is, entropic – quantities. Little has been studied in this direction, which would require developing a general framework to correctly quantify the partial information gained in approximate joint measurements, and then derive entropic uncertainty relations adapted to the scenarios under consideration." Beyond its possible applications for quantum information science, Branciard adds, the study of the uncertainty principle brings new insights on the foundations of quantum theory – and for Branciard, some puzzling questions in quantum foundations include why does quantum theory impose such limits on measurements, and why does it contain so many counterintuitive features, such as quantum entanglement and non locality? "A link has recently been established between standard uncertainty relations and the nonlocality of any theory. 
Studying this joint measurement aspect of the uncertainty principle," Branciard concludes, "may bring new insights and give a more complete picture of quantum theory by offering to address these metaphysical questions – which have been challenging physicists and philosophers since the invention of quantum theory – from a new perspective." Explore further: Quantum computers could greatly accelerate machine learning More information: Error-tradeoff and error-disturbance relations for incompatible quantum measurements, PNAS April 23, 2013 vol. 110 no. 17 6742-6747, doi:10.1073/pnas.1219331110 [1] Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement, Physical Review A 67, 042105 (2003), doi:10.1103/PhysRevA.67.042105
28
11 February 2015 Research New carbon allotrope could have interesting physical and electrical properties Other materials can be made into ultra-thin nanosheets. Jon Evans finds out whether they can generate the same buzz © Andras Kis The problem with being a champion, of course, is that you’re always being challenged by upstarts looking to usurp your position, and this is beginning to happen with graphene. Like graphene, these upstarts are two-dimensional crystals consisting of a thin layer of atoms, and while they possess many of the same properties as graphene they also boast a couple of new ones. Although they haven’t yet succeeded in pushing graphene off its perch, they’ve certainly managed to muscle their way on there as well. ‘When people were getting a bit tired with graphene, two-dimensional crystals appeared and brought a new rival into the area,’ says Novoselov. Far from being a threat, however, these new rivals could end up being the making of graphene. It was actually Novoselov and Geim who first showed that the same process they used to produce graphene, which involved carefully pulling a single layer of graphene from a lump of graphite with a piece of sticky tape, could also produce other two-dimensional crystals. In a 2005 paper,1 they reported using this technique to produce a range of two dimensional crystals from materials with a similar layered structure to graphite, including molybdenum disulfide (MoS2), niobium diselenide (NbSe2) and boron nitride (BN). It is by combining different 2D materials that researchers hope to make the most out of them © Andras Kis Unlike graphene, these compounds don’t actually consist of a single layer of atoms. Rather, they comprise a layer of transition metal atoms sandwiched between two layers of chalcogen atoms. However, the atoms in these three layers are strongly held together by covalent bonds, whereas each three-layer sheet is only linked to its neighbours by weak van der Waals bonds, allowing individual sheets to be separated from each other. Despite the different atomic structure, transition metal dichalcogenides share some of graphene’s impressive properties, driven by the fact that they are essentially all surface. Both are obviously very thin, although graphene is thinner, and both are very strong, although graphene is stronger. They also share some of the same challenges, especially in terms of finding methods to produce them at large scales. Painstakingly peeling individual two-dimensional sheets from a three-dimensional crystal, known as mechanical exfoliation, clearly does not lend itself to mass production. For transition metal dichalcogenides, better methods include chemical exfoliation, in which three-dimensional crystals are sonicated in solvents to release individual sheets, and chemical vapour deposition (CVD), which is already commonly used to produce carbon nanotubes. CVD involves passing one or more gases containing the component elements over a flat substrate, where they react together to form the two-dimensional crystal. One problem with these methods, however, is that the sheets they produce often aren’t as pristine as those produced by mechanical exfoliation, tending to contain more defects. It’s the ways in which the other two-dimensional crystals differ from graphene that offer most promise, however. Graphene is better at conducting electricity than copper, but many transition metal dichalcogenides are natural semiconductors, and boron nitride is an insulator. 
Furthermore, different transition metal dichalcogenides possess different semiconducting properties. So while graphene does undoubtedly possess some impressive physical properties, the broad suite of other two-dimensional crystals, including over 40 different transition metal dichalcogenides, gives scientists a lot more to play with. Membranes of MoS2 can be used to detect compounds by changes in their resonant frequency © Philip Feng, Zenghui Wang / Case Western Reserve University Scientists have recently discovered that the same thing happens with other two-dimensional crystals. In 2013, Philip Feng and his colleagues at Case Western Reserve University in Cleveland, US, showed that suspended sheets of MoS2 can actually vibrate faster than graphene, suggesting they could make even more sensitive sensors.2 ‘We’ve been actively studying MoS2 nanomechanical resonators vibrating at very high frequencies (in the VHF radio band), as high frequency devices require them to be smaller, thus offering higher speed and higher responsivity and sensitivities to external stimuli and disturbances,’ explains Feng. ‘So VHF MoS2 resonators have strong potential for detecting specific compounds in gas-phase sensing and analysis.’ Feng is also looking to enhance the sensing abilities of MoS2 still further, by combining the resonator and conductivity detection mechanisms; he is helped in this aim by the fact that MoS2 is a semiconductor. ‘By combining the attractive coupled electro-mechanical properties, we hope to develop interesting sensors with more integrated functionalities,’ he says. In other cases, it means that transition metal dichalcogenides can achieve feats that are simply not possible for graphene. For example, unlike graphene, transition metal dichalcogenides have catalytic abilities, with MoS2 being able to catalyse hydrogen evolution reactions. Studies have shown that it’s the edges of MoS2 sheets that are responsible for this catalytic activity, which is usually fairly weak. Recently, however, materials scientists from the US and Korea, led by Jiaxing Huang at Northwestern University in Illinois, have managed to increase the catalytic activity of both MoS2 and tungsten sulfide (WS2) by depositing gold nanoparticles on them.3 They do this by simply mixing chemically exfoliated sheets of MoS2 and WS2 with hexachloroauric acid in water, with the sheets reducing the acid and causing gold nanoparticles to form on their surface. Interestingly, this process works best with sheets possessing lots of defects, as the gold nanoparticles preferentially form at defect sites such as sheet edges and grain boundaries, where the three layers making up individual sheets don’t quite line up with each other. Huang and his team found that covering sheets of MoS2 and WS2 with gold nanoparticles in this way greatly enhances their ability to catalyse hydrogen evolution reactions. Huang says this increase in catalytic activity is probably due to the gold nanoparticles enhancing charge transport between different sheets, which suggests that these gold-covered sheets should be able to catalyse any electrocatalytic reactions. In addition, Huang thinks the same basic approach could be used to cover the sheets with various other nanoparticles, potentially allowing them to catalyse other types of reactions. Defects may also be responsible for another interesting property that so far has only been predicted for transition metal dichalcogenides. 
Scientists in the US, led by Boris Yakobson at Rice University in Houston, recently calculated that transition metal dichalcogenides should be magnetic, at least at grain boundaries.4 Transition metal dichalcogenides can have magnetic properties at grain boundaries (red = spin-oriented; green = opposite spin) © Zhuhua Zhang, Rice University The next step is to detect this magnetism experimentally, which is far from easy at these small scales. So Yakobson is now looking to collaborate with a team from Tsinghua University in Beijing, China, to obtain this experimental proof. If they do find it, this will offer further evidence that transition metal dichalcogenides are actually better positioned than graphene to transform computing. Modern computing is built on the semiconducting properties of silicon; in particular, its ability to allow current to pass under certain conditions but not under others. This allows the creation of the switch-like transistors that physically underpin the digital world’s ones and zeroes. The problem with graphene is that it isn’t a semiconductor, it’s a very efficient conductor: a switch that can’t be turned off. Graphene can be transformed into a semiconductor by chemically modifying it or by physically deforming it, but transition metal dichalcogenides such as MoS2 are natural semiconductors. Find a way to construct transistors out of them and you can shrink computer circuits down to atomic scales, a feat that would be impossible with conventional silicon-based technology. If transition metal dichalcogenides turn out to be magnetic as well, then that raises the possibility of using them to develop a whole new type of computing, in which the digital ones and zeroes are encoded in the spin states of electrons, rather than in electric charge. Known as spintronics, this switching between spin states can be achieved with an applied magnetic field and should offer much faster computer processing. Some scientists have even suggested that transition metal dichalcogenides could form the basis for yet another new type of computing, known as valleytronics. Electrons travel through crystals in waves, with these waves located in certain valleys of minimum energy. In valleytronics, the idea is to switch electrons between two different valleys as a way to encode the digital ones and zeroes. Rather handily, MoS2 possesses two such valleys and a couple of research groups recently showed that it was possible to switch electrons between these two valleys using polarised light.5 All is not lost for graphene, though, because although modern computer circuits are built on semiconductors, they also require conductors and insulators. Thus, if you want to construct an atomic-scale circuit from transition metal dichalcogenides, you’ll also need to use graphene as the conductor and maybe boron nitride as the insulator. This is exactly what Andras Kis and his colleagues at the Swiss Federal Polytechnic School in Lausanne have done. In 2011, they created a transistor consisting of a single layer of MoS2 as a semiconducting channel between two gold electrodes,6 before then replacing them with two graphene electrodes. Most recently, by placing a third layer of graphene on top of the MoS2 layer to act as charge trapping device known as a floating gate, they created a nonvolatile memory cell.7 Kis and his colleagues have already started to connect a few of these transistors and memory cells together to form simple circuits. 
Indeed, the real potential of two-dimensional crystals comes not from using them in isolation but in joining them together to form heterostructures. If you join lots of graphene sheets together then you get bulk graphite, and if you join lots of MoS2 sheets together then you get bulk MoS2. But if you join graphene sheets and MoS2 sheets together, then you get a material that doesn’t exist in nature and may possess some very interesting properties. Novoselov's team has created an efficient solar cell using layers of graphene and tungsten disulfide sheets © Konstantin Novoselov By sandwiching a sheet of WS2 between two sheets of graphene, Novoselov and his team have already managed to produce a very efficient solar cell.8 The graphene sheets are chemically modified such that electrons are the major charge carrier in one of them and holes (gaps produced by missing electrons) are the major charge carrier in the other. Graphene is naturally transparent and so the two graphene sheets allow light to pass through them to the semiconducting WS2 sheet. As is the case with silicon, when light hits the WS2 sheet, it generates electron-hole pairs. The electrons and holes are immediately attracted towards different graphene sheets, as one sheet is negatively charged while the other is positive, pulling the electron-hole pairs apart and generating an electric current. In developing such heterostructures, scientists already have a lot of two-dimensional crystals to choose from, with graphene and the many different transition metal dichalcogenides, but new two-dimensional crystals continue to appear. Over the past few years, several research groups have claimed to produce atom-thick layers of silicon, termed silicene, while US scientists recently predicted that an atom-thick layer of tin – dubbed stanene – will be a topological insulator with lossless electronic conduction zones. If this continues, graphene’s perch could soon become rather crowded. Jon Evans is a science writer based in Bosham, UK. 1 K S Novoselov et al, Proc. Natl. Acad. Sci. USA, 2005, 102, 10451 (DOI: 10.1073/pnas.0502848102) 2 J Lee et al, ACS Nano, 2013, 7, 6086 (DOI: 10.1021/nn4018872) 3 J Kim et al, J. Phys. Chem. Lett., 2013, 4, 1227 (DOI: 10.1021/jz400507t) 4 Z Zhang et al, ACS Nano, 2013, DOI: 10.1021/nn4052887 5 H Zeng et al, Nat. Nanotechnol., 2012, 7, 490 (DOI: 10.1038/nnano.2012.95) 6 S Bertolazzi, J Brivio and A Kis, ACS Nano, 2011, 5, 9703 (DOI: 10.1021/nn203879f) 7 S Bertolazzi, D Krasnozhon and A Kis, ACS Nano, 2013, 7, 3246 (DOI: 10.1021/nn3059136) 8 L Britnell et al, Science, 2013, 340, 1311 (DOI: 10.1126/science.1235547)
28
Latest Nanoelectronics Stories Twisting spires, concentric rings, and gracefully bending petals are a few of the new three-dimensional shapes that University of Michigan engineers can make from carbon nanotubes using a new manufacturing process. Scientists at the University of Leeds have perfected a new technique that allows them to make molecular nanowires out of thin strips of ring-shaped molecules known as discotic liquid crystals (DLCs). Rice University graduate student Jun Yao's research with silicon-oxide circuits could be a game-changer in nanoelectronics. Some bacteria grow electrical hair that lets them link up in big biological circuits. Dentists and their patients will soon benefit from a tiny new high-resolution X-ray camera. While refining their novel method for making nanoscale wires, chemists at the National Institute of Standards and Technology (NIST) discovered an unexpected bonus—a new way to create nanowires that produce light similar to that from light-emitting diodes (LEDs). Silicon-based film may lead to efficient thermoelectric devices. Electronic biosensing technology could facilitate a new era of personalized medicine.
28
Strong, lightweight plastic-like composites made with highly electrically conductive sheets of carbon just one atom thick could find use in electronics and protect aircraft from lightning strikes, experts told UPI's Nano World. The graphite found in pencils is made of layers just a single carbon atom thick known as graphene. Carbon nanotubes are simply graphene that has been rolled into a cylindrical shape. Investigators worldwide are researching carbon nanotubes for use in electronics because they are capable of conducting electricity at high speed with little energy loss. However, scientists have encountered many challenges when it comes to generating nanotubes with consistent electronic properties and with integrating them into circuitry via processes suitable for mass production. Carbon nanotubes are quite expensive to make as well. Graphene appears to have many of the electronic properties that make carbon nanotubes so attractive. Ideally, researchers could just take graphite and strip it apart into graphene sheets for use in devices. Graphite, which is sold for just a few dollars a pound with about 1 million metric tons sold annually worldwide, is far less expensive than carbon nanotubes. However, making isolated graphene sheets from graphite is not easy because they like to stick together. Physical chemist and materials scientist Rod Ruoff at Northwestern University in Evanston, Ill., and his colleagues experimented with electrically insulating graphite oxide, an oxygenated form of graphite. They found that a version of graphite oxide chemically modified with organic compounds, when dipped in solvents and treated with ultrasound waves, dispersed into sheets of oxygenated graphene. From there, the researchers found they could fuse these sheets with commercial polymers such as rubbers or polystyrene and strip the oxygen away to leave electrically conductive graphene. The polymers help keep the graphene from sticking together. The researchers found the electronic properties of their graphene-polystyrene hybrids compare well with the best values reported for nanotube-polymer composites. Moreover, unlike the nanotube-polymer materials, the graphene-polystyrene composites are easy to process using standard industrial processes such as injection molding or hot pressing. Ruoff and his colleagues reported their findings in the July 20 issue of the scientific journal Nature. "They have shown it's possible to produce graphene from graphite using really industrial scale processes so it can be used even for composites," said physicist Andre Geim at the University of Manchester in England. These materials could have applications in the transportation as well as the electronics industry, said researcher SonBinh Nguyen, a chemist at Northwestern. For instance, chemical engineer Nicholas Kotov at the University of Michigan at Ann Arbor said these composites might find use in aircraft fuselages, which must combine low weight, high strength and electrical conductivity. "It is quite important to have them conductive to prevent damage from lightning strikes and electromagnetic pulses. The two biggest companies in airplane production, Boeing and Airbus, consider it as one of the most important issues in future design of composite planes," Kotov explained. The graphene in the composites is basically there as wrinkly sheets. Future research can explore how the properties of the composites change when these sheets are flattened out, and with higher concentrations of graphene, Ruoff said. 
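For a sense of why the filler concentration matters in such conductive composites, percolation theory gives the standard description: above a critical filler fraction the conductivity rises as a power law. The sketch below is only that generic textbook relation with hypothetical placeholder numbers, not the model or the measured values from the Nature paper:

```python
# Illustrative percolation-scaling estimate for a conductive-filler composite
# (a generic textbook relation; all parameter values are hypothetical placeholders).
sigma0 = 1.0e2   # S/m, prefactor set by the filler network (assumed)
phi_c  = 0.001   # percolation threshold as a volume fraction (assumed)
t_exp  = 2.0     # critical exponent, roughly 1.6-2.0 for 3-D networks (assumed)

def composite_conductivity(phi):
    """Power-law conductivity above the percolation threshold, ~0 below it."""
    return sigma0 * (phi - phi_c) ** t_exp if phi > phi_c else 0.0

for phi in (0.0005, 0.002, 0.01, 0.05):
    print(f"filler fraction {phi:.4f} -> sigma ~ {composite_conductivity(phi):.3e} S/m")
```

Below the threshold the filler sheets do not form a connected network and the composite stays essentially insulating, which is why even small amounts of well-dispersed graphene can switch a polymer from insulating to conductive.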
Copyright 2006 by United Press International Explore further: Engineers create structures tougher than bulletproof vests
28
New technologies for the diagnosis of cancer are rapidly changing the clinical practice of oncology. As scientists learn more about the molecular basis of cancer, the development of new tools capable of multiple, inexpensive biomarker measurements on small samples of clinical tissue will become essential to the success of genetically informed and personalized cancer therapies. Researchers at UCLA have now developed a microfluidic image cytometry (MIC) platform that can measure cell-signaling pathways in brain tumor samples at the single-cell level. The new technology combines the advantages of microfluidics and microscopy-based cell imaging. The ability to make these in vitro molecular measurements, or "fingerprints," marks a new advance in molecular diagnostics that could ultimately help physicians predict patient prognosis and guide personalized treatment. "The MIC is essentially a cancer diagnostic chip that can generate single-cell 'molecular fingerprints' for a small quantity of pathology samples, including brain tumor tissues," said Dr. Hsian-Rong Tseng, a UCLA associate professor of molecular and medical pharmacology and one of the leaders of the research. "We are exploring the use of the MIC for generating informative molecular fingerprints from rare populations of oncology samples — for example, tumor stem cells." The research, which appears in the Aug. 1 issue of the journal Cancer Research, represents the teamwork of 35 co-authors from UCLA's Jonsson Comprehensive Cancer Center with expertise in surgery, pathology, cancer biology, bioinformatics and diagnostic devices. Led by Tseng and Thomas Graeber, an assistant professor of molecular and medical pharmacology, both of whom are researchers at the Crump Institute for Molecular Imaging at the David Geffen School of Medicine at UCLA and the California NanoSystems Institute (CNSI) at UCLA, the team analyzed a panel of 19 human brain tumor biopsies to show the clinical application of the MIC platform to solid tumors. The researchers also developed new bioinformatics — computational and statistical techniques and algorithms — that allowed them to process and analyze the data gleaned from the MIC platform's single-cell measurements. "Because the measurements are at the single-cell level, computational algorithms are then used to organize and find patterns in the thousands of measurements," Graeber said. "These patterns relate to the growth signaling pathways active in the tumor that should be targeted in genetically informed or personalized anticancer therapies." "The single-cell nature of the MIC brain tumor data presented an exciting and challenging opportunity," said Dr. Nicholas Graham, a postdoctoral scholar at the CNSI who worked out the data analysis. "To make sense of the data, we had to develop some new bioinformatics approaches that would preserve the power of single-cell analysis but allow for comparison between patients." Molecular and medical pharmacology graduate researcher Michael Masterman-Smith approached the project as a translational cancer biologist. "When we incorporated patient outcome data into our analyses and found that these 'biosignatures' clustered to reveal distinctive signaling phenomena that correlated with outcome, it got truly exciting," he said. 
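As a schematic of the kind of bioinformatics step described here (a hedged sketch of the general idea with synthetic data, not the authors' actual pipeline), one can reduce each patient's single-cell readouts to distribution summaries that preserve cell-to-cell variation, and then cluster the resulting per-patient fingerprints:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic example only: sizes loosely mirror the study, values are made up.
rng = np.random.default_rng(0)
n_patients, n_markers, n_cells = 19, 4, 2000

fingerprints = []
for p in range(n_patients):
    # Simulated single-cell intensities for each signaling marker.
    cells = rng.lognormal(mean=rng.normal(0, 0.5, n_markers), sigma=0.4,
                          size=(n_cells, n_markers))
    # Keep single-cell information via per-marker quartiles rather than plain means.
    summary = np.percentile(cells, [25, 50, 75], axis=0).ravel()
    fingerprints.append(summary)

X = np.array(fingerprints)
Z = linkage(X, method="ward")               # hierarchical clustering of patients
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster assignment per patient:", labels)
```

The design point is simply that summarizing each marker's distribution, rather than averaging it away, lets patients be compared while the single-cell character of the data is retained.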
Microscale technology platforms are finding wide application in biological assays in which careful manipulation and measurement of limited sample amounts are required, and the new MIC platform is capable of making molecular measurements on small tumor samples provided by tumor resection and biopsy using as few as 1,000 to 3,000 cells, according to the researchers. "The promise and attractiveness of this approach is the small amount of tissue needed for analysis in the face of increasing numbers of prognostic and predictive markers, and the possibility of quantifying tumor genetic heterogeneity," said Dr. William Yong, a Jonsson Cancer Center physician-scientist who led the pathology aspects of the research. "However, much work remains to validate this study with larger sample sizes and with more markers." "We are excited about the possibility of using this method to investigate responses of individual tumors to potential therapeutics, as well as to enhance our knowledge about how they become resistant to therapies," said Dr. Harley Kornblum, a physician-scientist who studies brain tumor biology and is a member of both UCLA's Intellectual and Developmental Disabilities Research Center and the Johnson Cancer Center cell biology program area. "Scientific, medical and engineering disciplines each have their own approach to problem-solving" said Dr. Jing Sun, a postdoctoral scholar at CNSI and an organic chemist. "For the innovative process to yield something useful, it must be faster, better, cheaper and — of course, with microscale technologies — smaller." The researchers will next apply the new platform to larger cohorts of cancer patient samples and integrate the diagnostic approach into clinical trials of molecular therapies. CytoScale Diagnostics has signed a letter of agreement regarding the technology mentioned in this paper. This study was funded by the National Cancer Institute/ National Institutes of Health (NCI/NIH) and the National Institute of Neurological Disorders and Stroke (NINDS). The UCLA Crump Institute for Molecular Imaging is a multidisciplinary collaborative of faculty, postdoctoral scholars and graduate students engaged in cutting-edge research in the fields of molecular diagnostics, microfluidics, systems biology and nanotechnology, with the aim of developing new technologies to observe, measure and understand biology in cells, tissues and living organisms through molecular imaging. The institute's ultimate goal is to provide the technology and science that will lead to a better understanding of the transition from health to disease at the molecular level and the development of new therapies to treat disease as part of a new era in molecular medicine. UCLA's Jonsson Comprehensive Cancer Center has more than 240 researchers and clinicians engaged in disease research, prevention, detection, control, treatment and education. One of the nation's largest comprehensive cancer centers, the Jonsson Center is dedicated to promoting research and translating basic science into leading-edge clinical studies. In July 2010, the Jonsson Cancer Center was named among the top 10 cancer centers nationwide by U.S. News & World Report, a ranking it has held for 10 of the last 11 years. 
The California NanoSystems Institute at UCLA is an integrated research center operating jointly at UCLA and UC Santa Barbara whose mission is to foster interdisciplinary collaborations for discoveries in nanosystems and nanotechnology; train the next generation of scientists, educators and technology leaders; and facilitate partnerships with industry, fueling economic development and the social well-being of California, the United States and the world. The CNSI was established in 2000 with $100 million from the state of California and an additional $250 million in federal research grants and industry funding. At the institute, scientists in the areas of biology, chemistry, biochemistry, physics, mathematics, computational science and engineering are measuring, modifying and manipulating the building blocks of our world — atoms and molecules. These scientists benefit from an integrated laboratory culture enabling them to conduct dynamic research at the nanoscale, leading to significant breakthroughs in the areas of health, energy, the environment and information technology.
28
Engineered nanomaterials, prized for their unique semiconducting properties, are already prevalent in everyday consumer products, from sunscreens, cosmetics and paints to textiles and solar batteries, and economic forecasters are predicting the industry will grow into a $1 trillion business in the next few years. But how safe are these materials? Because the semiconductor properties of metal-oxide nanomaterials could potentially translate into health hazards for humans, animals and the environment, it is imperative, researchers say, to develop a method for rapidly testing these materials to determine the potential hazards and take appropriate preventative action. To that end, UCLA researchers and their colleagues have developed a novel screening technology that allows large batches of these metal-oxide nanomaterials to be assessed quickly, based on their ability to trigger certain biological responses in cells as a result of their semiconductor properties. The research is published in the journal ACS Nano. Just as semiconductors can inject or extract electrons from industrial materials, semiconducting metal-oxide nanomaterials can have an electron-transfer effect when they come into contact with human cells that contain electronically active molecules, the researchers found. And while these oxidation-reduction reactions are helpful in industry, when they occur in the body they have the potential to generate oxygen radicals, which are highly reactive oxygen molecules that damage cells, triggering acute inflammation in the lungs of exposed humans and animals. In a key finding, the research team predicted that metal-oxide nanomaterials and electronically active molecules in the body must have similar electron energy levels (called band-gap energy in the case of the nanomaterial) for this hazardous electron transfer to occur and oxidative damage to result. Based on this prediction, the researchers screened 24 metal-oxide nanoparticles to determine which were most likely to lead to toxicity under real-life exposure. Using a high-throughput screening assay (performed by robotic equipment and an automated image-capture microscope), they tested the two dozen materials on a variety of cell types in a matter of a few hours and found that six of them – those that had previously met the researchers' predictive criteria for being toxic based on their band-gap energy – led to oxidative damage in cells. The team then tested the nanomaterials in well-orchestrated animal studies and found that only those materials that had led to oxidative damage in cells were capable of generating inflammation in the lungs of mice, confirming the researchers' band-gap hypothesis. "The ability to make such predictions, starting with cells in a test tube, and extrapolating the results to intact animals and humans exposed to potentially hazardous metal oxides, is a huge step forward in the safety screening of nanomaterials," said senior author Dr. Andre Nel, chief of the division of nanomedicine at the David Geffen School of Medicine at UCLA and the California NanoSystems Institute at UCLA and director of the University of California Center for Environmental Implications of Nanotechnology. According to the researchers, this new safety-assessment technology has the potential to replace traditional testing, which is currently performed one material at a time in labor-intensive animal studies using a "wait-and-see" approach that doesn't reveal why the implicated nanomaterials could be hazardous. 
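The band-gap overlap idea can be phrased as a simple screening filter. The sketch below captures only that logic; the redox window and the per-material conduction-band values are hypothetical placeholders, not the energies, materials or criteria used in the study:

```python
# Schematic band-energy screen (illustration only; the energy window and the
# material values below are hypothetical placeholders, not the study's data).
BIO_REDOX_WINDOW = (-4.8, -4.1)   # eV vs vacuum, assumed biological redox range

# Hypothetical conduction-band edge energies (eV vs vacuum) for candidate oxides.
candidates = {
    "oxide_A": -4.3,
    "oxide_B": -3.6,
    "oxide_C": -4.7,
    "oxide_D": -5.2,
}

def overlaps_redox_window(e_c, window=BIO_REDOX_WINDOW):
    """Flag materials whose conduction-band edge falls inside the redox window,
    i.e. those predicted to support electron transfer and oxidative stress."""
    lo, hi = window
    return lo <= e_c <= hi

flagged = [name for name, e_c in candidates.items() if overlaps_redox_window(e_c)]
print("predicted to need priority toxicity testing:", flagged)
```

The point of such a filter is triage: materials predicted to overlap the biologically relevant energy range are prioritized for cellular and animal testing, rather than testing every material exhaustively.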
The UCLA team's predictive approach and screening technique could speed up the ability to assess large numbers of emerging new nanomaterials rather than waiting for their toxicological potential to become manifest before action is taken. "Being able to integrate metal-oxide electronic properties into a predictive and high-throughput scientific platform in this work could play an important role in advancing nanomaterial safety testing in the 21st century to a preventative strategy, rather than waiting for problems to emerge," Nel said. Another major advantage of an approach based on the assessment of nanomaterials' properties is that one can identify those properties that could potentially be redesigned to make the materials less hazardous, the researchers said. The implementation of high-throughput screening is also leading to the development of computer tools that assist in prediction-making; in the future, much of the safety assessment of nanomaterials could be carried out using computer programs that perform smart modeling and simulation procedures based on electronic properties. "We can now further refine the testing of an important class of engineered nanomaterials to the level where regulatory agencies can make use of our predictions and testing methods," said Haiyuan Zhang, a postdoctoral research scholar at the Center for Environmental Implications of Nanotechnology at UCLA's CNSI and the lead author of the study. Explore further: Optically activating a cell signaling pathway using carbon nanotubes
28
Invisibility is the state of an object that cannot be seen. An object in this state is said to be invisible (literally, "not visible"). The term is often used in fantasy/science fiction, where objects are literally made unseeable by magical or technological means; however, its effects can also be demonstrated in the real world, particularly in physics and perceptual psychology classes. Since objects can be seen by light in the visible spectrum from a source reflecting off their surfaces and hitting the viewer's eye, the most natural form of invisibility (whether real or fictional) is an object that neither reflects nor absorbs light (that is, it allows light to pass through it). This is known as transparency, and is seen in many naturally occurring materials (although no naturally occurring material is 100% transparent). Invisibility perception depends on several optical and visual factors. For example, invisibility depends on the eyes of the observer and/or the instruments used. Thus an object can be classified as "invisible to" a person, animal, instrument, etc. In research on sensory perception it has been shown that invisibility is perceived in cycles. Invisibility is often considered to be the supreme form of camouflage, as it does not reveal to the viewer any kind of vital signs, visual effects, or any frequencies of the electromagnetic spectrum detectable to the human eye, instead making use of radio, infrared or ultraviolet wavelengths. In illusion optics, invisibility is a special case of illusion effects: the illusion of free space. Technology can be used theoretically or practically to render real-world objects invisible: - Making use of a real-time image displayed on a wearable display, it is possible to create a see-through effect. This is known as active camouflage. - Though stealth technology is declared to be invisible to radar, all officially disclosed applications of the technology can only reduce the size and/or clarity of the signature detected by radar. - In some science fiction stories, a hypothetical "cloaking device" is used to make objects invisible. On October 19, 2006, a team of researchers from Britain and the US announced the development of a real cloak of invisibility, though it is only in its first stages. - In filmmaking, people, objects, or backgrounds can be made to look invisible on camera through a process known as chroma keying. - An artificially made metamaterial that is invisible to the microwave spectrum. Engineers and scientists have performed various kinds of research to investigate the possibility of finding ways to create real optical invisibility (cloaks) for objects. Methods are typically based on implementing the theoretical techniques of transformation optics, which have given rise to several theories of cloaking. - Currently, a practical cloaking device does not exist. A 2006 theoretical work predicts that the imperfections are minor, and metamaterials may make real-life "cloaking devices" practical. The technique is predicted to be applied to radio waves within five years, and the distortion of visible light is an eventual possibility. The theory that light waves can be acted upon the same way as radio waves is now a popular idea among scientists. The cloaking agent can be compared to a stone in a river: the water passes around it, yet slightly downstream it leaves no trace of the stone. 
Comparing light waves to the water, and whatever object is being "cloaked" to the stone, the goal is to have light waves pass around that object, leaving no visible aspects of it, possibly not even a shadow. This is the technique depicted in the 2000 television portrayal of The Invisible Man. - Two teams of scientists worked separately to create two "invisibility cloaks" from 'metamaterials' engineered at the nanoscale level. They demonstrated for the first time the possibility of cloaking three-dimensional (3-D) objects with artificially engineered materials that redirect radar, light or other waves around an object. While one uses a type of fishnet of metal layers to reverse the direction of light, the other uses tiny silver wires. Xiang Zhang, of the University of California, Berkeley, said: "In the case of invisibility cloaks or shields, the material would need to curve light waves completely around the object like a river flowing around a rock. An observer looking at the cloaked object would then see light from behind it, making it seem to disappear." - UC Berkeley researcher Jason Valentine's team made a material that affects light near the visible spectrum, in a region used in fibre optics: "Instead of the fish appearing to be slightly ahead of where it is in the water, it would actually appear to be above the water's surface. It's kind of weird." For a metamaterial to produce negative refraction, it must have a structural array smaller than the wavelength of the electromagnetic radiation being used. Valentine's team created their 'fishnet' material by stacking silver and metal dielectric layers on top of each other and then punching holes through them. The other team used an oxide template and grew silver nanowires inside porous aluminum oxide at tiny distances apart, smaller than the wavelength of visible light. This material refracts visible light. - The Imperial College London research team achieved results with microwaves. An invisibility cloak layout of a copper cylinder was produced in May 2008 by physicist Professor Sir John Pendry. Scientists working with him at Duke University in the US put the idea into practice. - Pendry, who theorized the invisibility cloak "as a joke" to illustrate the potential of metamaterials, said in an interview in August 2011 that grand, theatrical manifestations of his idea are probably overblown: "I think it's pretty sure that any cloak that Harry Potter would recognize is not on the table. You could dream up some theory, but the very practicality of making it would be so impossible. But can you hide things from light? Yes. Can you hide things which are a few centimeters across? Yes. Is the cloak really flexible and flappy? No. Will it ever be? No. So you can do quite a lot of things, but there are limitations. There are going to be some disappointed kids around, but there might be a few people in industry who are very grateful for it." - In 2009, researchers at Bilkent University's nanotechnology research centre in Turkey reported in the New Journal of Physics that they had demonstrated practical invisibility, using a nanostructured material to render an object almost perfectly transparent, with no shadows, and suggested that such a material could even be fabricated as a suit someone could wear. A person can also be described as invisible if others refuse to see him, or overlook him. 
The term was used in this manner in the title of the book Invisible Man, by Ralph Ellison, in reference to the protagonist, likely modeled after Ellison himself, being overlooked on account of his status as an African American. In fiction, people or objects can be rendered completely invisible by several means: - Magical objects such as rings, cloaks and amulets can be worn to grant the wearer permanent invisibility (or temporary invisibility until the object is taken off). - Magical potions can be consumed to grant temporary or permanent invisibility. - Magic spells can be cast on people or objects, usually giving temporary invisibility. - Some mythical creatures can make themselves invisible at will, such as in some tales in which Leprechauns or Chinese dragons can shrink so much that humans cannot see them. In some works, the power of magic creates an effective means of invisibility by distracting anyone who might notice the character. But since the character is not truly invisible, the effect could be betrayed by mirrors or other reflective surfaces. Where magical invisibility is concerned, the issue may arise of whether the clothing worn by and any items carried by the invisible being are also rendered invisible. In general they are also regarded as being invisible, but in some instances clothing remains visible and must be removed for the full invisibility effect. - Active camouflage - Cloak of invisibility - Cloaking device - Somebody Else's Problem - Moreno, Ivan; Jauregui-Sánchez, Y.; Avendaño-Alejo, Maximino (2014). "Invisibility assessment: a visual perception approach". J. Opt. Soc. Am. A 31: 2244–2248. doi:10.1364/josaa.31.002244. - Craig, Eugene A.; Lichtenstein, M. (1953). "Visibility-Invisibility Cycles as a Function of Stimulus-Orientation". The American Journal of Psychology 66 (4): 554–563. doi:10.2307/1418951. - Cloak of invisibility: Fact or fiction? - Innovation - MSNBC.com - Nachman, Adrian I. (November 1988). "Reconstructions From Boundary Measurements". Annals of Mathematics (Annals of Mathematics) 128 (3): 531–576. doi:10.2307/1971435. JSTOR 1971435. - Wolf, Emil; Tarek Habashy (May 1993). "Invisible Bodies and Uniqueness of the Inverse Scattering Problem". Journal of Modern Optics 40 (5): 785–792. Bibcode:1993JMOp...40..785W. doi:10.1080/09500349314550821. Retrieved 2006-08-01. - Pendry, J. B.; D. Schurig; D. R. Smith (June 2006). "Controlling Electromagnetic Fields". Science 312 (5781): 1780–1782. Bibcode:2006Sci...312.1780P. doi:10.1126/science.1125907. PMID 16728597. Retrieved 2006-08-01. - Leonhardt, Ulf (June 2006). "Optical Conformal Mapping". Science 312 (5781): 1777–1780. Bibcode:2006Sci...312.1777L. doi:10.1126/science.1126493. PMID 16728596. Retrieved 2006-08-01. - Cho, Adrian (2006-05-26). "High-Tech Materials Could Render Objects Invisible". Science. p. 1120. Retrieved 2006-08-01. - "Invisibility cloak a step closer as scientists bend light 'the wrong way'". Daily Mail (London). 2008-08-11. - themoneytimes.com, Scientists Turn Fiction Into Reality, Closer to Make Objects "Invisible" - mirror.co.uk, Secrets of invisibility discovered - John Pendry video: The birth and promise of metamaterials, SPIE Newsroom, 18 October 2011, doi:10.1117/2.3201110.02. 
- The Digital Chameleon Principle: Computing Invisibility by Rendering Transparency - Physics World special issue on invisibility science - July 2011 - Light Fantastic: Flirting With Invisibility - The New York Times - Invisibility in the real world Interesting picture of a test tube's bottom half invisible in cooking oil. - Brief piece on why visible light is visible - Straight Dope - CNN.com - Science reveals secrets of invisibility - Aug 9, 2006 - - Next to perfect Invisibility achieved using nanotechnologic material In Turkey - July 2009
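The negative-refraction behaviour Valentine describes earlier in the piece (a fish appearing above the water's surface) follows directly from Snell's law once a negative refractive index is allowed. The sketch below is a minimal illustration assuming simple scalar indices, ignoring losses and dispersion; the index values are placeholders, not measured properties of either team's metamaterial.

import math

def refraction_angle_deg(theta_incident_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    A negative n2 yields a negative angle, meaning the refracted ray bends
    to the same side of the surface normal as the incident ray."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(s))

# Ordinary water-like medium versus a hypothetical negative-index medium
print(refraction_angle_deg(30, 1.0, 1.33))    # ~ +22 degrees: normal refraction
print(refraction_angle_deg(30, 1.0, -1.33))   # ~ -22 degrees: negative refraction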
28
Spintronics (a portmanteau meaning "spin transport electronics"), also known as spinelectronics or fluxtronics, is an emerging technology exploiting both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices. Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985), and the discovery of giant magnetoresistance independently by Albert Fert et al. and Peter Grünberg et al. (1988). The origins of spintronics can be traced back even further, to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and the initial experiments on magnetic tunnel junctions by Julliere in the 1970s. The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990. The spin of the electron is an angular momentum intrinsic to the electron that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is ħ/2, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as μ_S = (√3/2)(q/m_e)ħ, where q is the electron charge and m_e its mass. In a solid, the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing a material with a permanent magnetic moment as in a ferromagnet. In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin-up or spin-down electrons. The polarization of any spin-dependent property X can be written as P_X = (X↑ - X↓)/(X↑ + X↓), the normalized difference between its spin-up and spin-down values. A net spin polarization can be achieved either by creating an equilibrium energy splitting between spin up and spin down, such as putting a material in a large magnetic field (Zeeman effect) or exploiting the exchange energy present in a ferromagnet, or by forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, τ. In a diffusive conductor, a spin diffusion length, λ, can also be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond), and a great deal of research in the field is devoted to extending this lifetime to technologically relevant timescales. There are many mechanisms of decay for a spin-polarized population, but they can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore send an incoming spin-up state into an outgoing spin-down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures. 
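As a quick numerical illustration of the polarization definition just given, the short sketch below evaluates P_X for a spin-resolved conductance; the values are invented for demonstration and do not correspond to any particular material.

def spin_polarization(x_up, x_down):
    """P_X = (X_up - X_down) / (X_up + X_down) for any spin-dependent quantity X."""
    return (x_up - x_down) / (x_up + x_down)

# Hypothetical spin-resolved conductances, arbitrary units
print(f"{spin_polarization(0.8, 0.2):+.2f}")   # +0.60 -> strongly spin-up polarized
print(f"{spin_polarization(0.5, 0.5):+.2f}")   # +0.00 -> unpolarized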
By studying new materials and decay mechanisms, researchers hope to improve the performance of practical devices as well as study more fundamental problems in condensed matter physics. Metal-based spintronic devices The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor. Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers, and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers. Other metal-based spintronic devices: - Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers. - Spin-transfer torque, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device. - Spin-wave logic devices utilize the phase to carry information. Interference and spin-wave scattering are utilized to perform logic operations. Non-volatile spin-logic devices to enable scaling beyond the year 2025 are being extensively studied. Spin-transfer torque-based logic devices that use spins and magnets for information processing have been proposed and are being extensively studied at Intel. These devices are now part of the ITRS exploratory road map and have potential for inclusion in future computers. Logic-in-memory applications are already in the development stage at Crocus and NEC. Motorola has developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor, with a read/write cycle of under 50 nanoseconds. (Everspin, Motorola's spin-off, has since developed a 4 Mb version.) There are two second-generation MRAM techniques currently in development: thermal-assisted switching (TAS), which is being developed by Crocus Technology, and spin-transfer torque (STT), on which Crocus, Hynix, IBM, and several other companies are working. Another design in development, called racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire. There are also magnetic sensors that use the GMR effect. In 2012, IBM scientists mapped the creation of persistent spin helices of synchronized electrons persisting for more than a nanosecond. This is a 30-fold increase over previously observed results and is longer than the duration of a modern processor clock cycle, which opens new paths to investigate for using electron spins for information processing. Semiconductor-based spintronic devices Much recent research has focused on the study of dilute ferromagnetism in doped semiconductor materials. In recent years, dilute magnetic oxides (DMOs), including ZnO-based DMOs and TiO2-based DMOs, have been the subject of numerous experimental and computational investigations. 
For non-oxide ferromagnetic semiconductor sources (such as manganese-doped gallium arsenide, GaMnAs), spin injection into a semiconductor can be improved by increasing the interface resistance with a tunnel barrier, or by using hot-electron injection. Spin detection in semiconductors is another challenge, met with the following techniques: - Faraday/Kerr rotation of transmitted/reflected photons - Circular polarization analysis of electroluminescence - Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals) - Ballistic spin filtering Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect. Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output. Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope. Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer, by van Dijken et al. and Jiang et al., has the following terminals: - Emitter (FM1): It injects spin-polarized hot electrons into the base. - Base (FM2): Spin-dependent scattering takes place in the base. It also serves as a spin filter. - Collector (GaAs): A Schottky barrier is formed at the interface. This collector region only collects electrons when they have enough energy to overcome the Schottky barrier, and when there are states available in the semiconductor. The magnetocurrent (MC) is given as MC = (I_C,P - I_C,AP) / I_C,AP, where I_C,P and I_C,AP are the collector currents for parallel and antiparallel alignment of the two ferromagnetic layers, and the transfer ratio (TR) is TR = I_C / I_E, the ratio of collector current to emitter current. The MTT promises a highly spin-polarized electron source at room temperature. Ferromagnetic versus antiferromagnetic storage media Antiferromagnetic storage media have recently been studied as well, whereas hitherto ferromagnetism has always been used, especially since with antiferromagnetic material the bits 0 and 1 can be stored just as with ferromagnetic material (instead of the usual definition 0 -> 'magnetisation upwards', 1 -> 'magnetisation downwards', one may define, e.g., 0 -> 'vertically-alternating spin configuration' and 1 -> 'horizontally-alternating spin configuration'). The main advantages of using antiferromagnetic material are: - the insensitivity to perturbations by stray fields; - the far shorter switching times; - the lack of effect on nearby particles. - Spin pumping - Spin transfer - List of emerging technologies - Wolf, S. A.; Chtchelkanova, A. Y.; Treger, D. M. (2006). "Spintronics—A retrospective and perspective". IBM Journal of Research and Development 50: 101. doi:10.1147/rd.501.0101. - Physics Profile: "Stu Wolf: True D! Hollywood Story" - Spintronics: A Spin-Based Electronics Vision for the Future. Sciencemag.org (16 November 2001). Retrieved on 21 October 2013. - Johnson, M.; Silsbee, R. H. (1985). "Interfacial charge-spin coupling: Injection and detection of spin magnetization in metals". Physical Review Letters 55 (17): 1790–1793. Bibcode:1985PhRvL..55.1790J. doi:10.1103/PhysRevLett.55.1790. PMID 10031924. - Baibich, M. N.; Broto, J. M.; Fert, A.; Nguyen Van Dau, F. N.; Petroff, F.; Etienne, P.; Creuzet, G.; Friederich, A.; Chazelas, J. (1988). "Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices". 
Physical Review Letters 61 (21): 2472–2475. doi:10.1103/PhysRevLett.61.2472. PMID 10039127. - Binasch, G.; Grünberg, P.; Saurenbach, F.; Zinn, W. (1989). "Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange". Physical Review B 39 (7): 4828. doi:10.1103/PhysRevB.39.4828. - Julliere, M. (1975). "Tunneling between ferromagnetic films". Physics Letters A 54 (3): 225–201. Bibcode:1975PhLA...54..225J. doi:10.1016/0375-9601(75)90174-7. - Datta, S. & Das, B. (1990). "Electronic analog of the electrooptic modulator". Applied Physics Letters 56 (7): 665–667. Bibcode:1990ApPhL..56..665D. doi:10.1063/1.102730. - International Technology Roadmap for Semiconductors - Behin-Aein, B.; Datta, D.; Salahuddin, S.; Datta, S. (2010). "Proposal for an all-spin logic device with built-in memory". Nature Nanotechnology 5 (4): 266–270. doi:10.1038/nnano.2010.31. PMID 20190748. - Manipatruni, Sasikanth; Nikonov, Dmitri E. and Young, Ian A. (2011) [1112.2746] Circuit Theory for SPICE of Spintronic Integrated Circuits. Arxiv.org. Retrieved on 21 October 2013. - Crocus Partners With Starchip To Develop System-On-Chip Solutions Based on Magnetic-Logic-Unit™ (MLU) Technology. crocus-technology.com. 8 December 2011 - Groundbreaking New Technology for Improving the Reliability of Spintronics Logic Integrated Circuits. Nec.com. 11 June 2012. - Spintronics. Sigma-Aldrich. Retrieved on 21 October 2013. - Everspin. Everspin. Retrieved on 21 October 2013. - Hoberman, Barry. The Emergence of Practical MRAM. crocustechnology.com - LaPedus, Mark (18 June 2009) Tower invests in Crocus, tips MRAM foundry deal. eetimes.com - Walser, M.; Reichl, C.; Wegscheider, W. & Salis, G. (2012). "Direct mapping of the formation of a persistent spin helix". Nature Physics 8 (10): 757. Bibcode:2012NatPh...8..757W. doi:10.1038/nphys2383. - Assadi, M.H.N; Hanaor, D.A.H (2013). "Theoretical study on copper's energetics and magnetism in TiO2 polymorphs". Journal of Applied Physics 113 (23): 233913. arXiv:1304.1854. Bibcode:2013JAP...113w3913A. doi:10.1063/1.4811539. - Ogale, S.B (2010). "Dilute doping, defects, and ferromagnetism in metal oxide systems". Advanced Materials 22 (29): 3125–3155. doi:10.1002/adma.200903891. PMID 20535732. - Jonker, B.; Park, Y.; Bennett, B.; Cheong, H.; Kioseoglou, G.; Petrou, A. (2000). "Robust electrical spin injection into a semiconductor heterostructure". Physical Review B 62 (12): 8180. Bibcode:2000PhRvB..62.8180J. doi:10.1103/PhysRevB.62.8180. - Hanbicki, A. T.; Jonker, B. T.; Itskos, G.; Kioseoglou, G.; Petrou, A. (2002). "Efficient electrical spin injection from a magnetic metal/tunnel barrier contact into a semiconductor". Applied Physics Letters 80 (7): 1240. arXiv:cond-mat/0110059. Bibcode:2002ApPhL..80.1240H. doi:10.1063/1.1449530. - Jiang, X.; Wang, R.; Van Dijken, S.; Shelby, R.; MacFarlane, R.; Solomon, G.; Harris, J.; Parkin, S. (2003). "Optical Detection of Hot-Electron Spin Injection into GaAs from a Magnetic Tunnel Transistor Source". Physical Review Letters 90 (25). Bibcode:2003PhRvL..90y6603J. doi:10.1103/PhysRevLett.90.256603. - Kikkawa, J.; Awschalom, D. (1998). "Resonant Spin Amplification in n-Type GaAs". Physical Review Letters 80 (19): 4313. Bibcode:1998PhRvL..80.4313K. doi:10.1103/PhysRevLett.80.4313. - Jonker, Berend T. Polarized optical emission due to decay or recombination of spin-polarized injected carriers – US Patent 5874749. Issued on 23 February 1999. - Lou, X.; Adelmann, C.; Crooker, S. A.; Garlid, E. S.; Zhang, J.; Reddy, K. S. 
M.; Flexner, S. D.; Palmstrøm, C. J.; Crowell, P. A. (2007). "Electrical detection of spin transport in lateral ferromagnet–semiconductor devices". Nature Physics 3 (3): 197. Bibcode:2007NatPh...3..197L. doi:10.1038/nphys543. - Appelbaum, I.; Huang, B.; Monsma, D. J. (2007). "Electronic measurement and control of spin transport in silicon". Nature 447 (7142): 295–298. doi:10.1038/nature05803. PMID 17507978. - Žutić, I.; Fabian, J. (2007). "Spintronics: Silicon twists". Nature 447 (7142): 268–269. doi:10.1038/447269a. PMID 17507969. - Holub, M.; Shin, J.; Saha, D.; Bhattacharya, P. (2007). "Electrical Spin Injection and Threshold Reduction in a Semiconductor Laser". Physical Review Letters 98 (14). Bibcode:2007PhRvL..98n6603H. doi:10.1103/PhysRevLett.98.146603. - Van Dijken, S.; Jiang, X.; Parkin, S. S. P. (2002). "Room temperature operation of a high output current magnetic tunnel transistor". Applied Physics Letters 80 (18): 3364. doi:10.1063/1.1474610. - See, e.g.: Jungwirth, T., announcement of a colloqium talk at the physics faculty of a bavarian university, 28 April 2014: Relativistic Approaches to Spintronics with Antiferromagnets. - This corresponds mathematically to the transition from the rotation group SO(3) to its relativistic covering, the "double group" SU(2) - "Introduction to Spintronics". Marc Cahay, Supriyo Bandyopadhyay, CRC Press, ISBN 0-8493-3133-1 - J. A. Gupta; R. Knobel; N. Samarth; D. D. Awschalom (29 June 2001). "Ultrafast Manipulation of Electron Spin Coherence". Science 292 (5526): 2458–2461. Bibcode:2001Sci...292.2458G. doi:10.1126/science.1061169. PMID 11431559. - Wolf, S. A.; Awschalom, DD; Buhrman, RA; Daughton, JM; von Molnár, S; Roukes, ML; Chtchelkanova, AY; Treger, DM (16 November 2001). "Spintronics: A Spin-Based Electronics Vision for the Future". Science 294 (5546): 1488–1495. Bibcode:2001Sci...294.1488W. doi:10.1126/science.1065389. PMID 11711666. - Sharma, P. (28 January 2005). "How to Create a Spin Current". Science 307 (5709): 531–533. doi:10.1126/science.1099388. PMID 15681374. - "Electron Manipulation and Spin Current". D. Grinevich. 3rd Edition, 2003.* - Žutić, I.; Das Sarma, S. (2004). "Spintronics: Fundamentals and applications". Reviews of Modern Physics 76 (2): 323. arXiv:cond-mat/0405528. Bibcode:2004RvMP...76..323Z. doi:10.1103/RevModPhys.76.323. - Parkin, Stuart; Ching-Ray, Chang; Chantrell, Roy, eds. (2011). "SPIN". World Scientific. ISSN 2010-3247. - "Spintronics Steps Forward.", University of South Florida News - Bader, S. D.; Parkin, S. S. P. (2010). "Spintronics". Annual Review of Condensed Matter Physics 1: 71. doi:10.1146/annurev-conmatphys-070909-104123. - Mukesh D. Patil; Jitendra S. Pingale; Umar I. Masumdar (2013). "Overview of Spintronics". ESRSA Publications. ISSN 2278-0181. - Jitendra S. Pingale; Mukesh D. Patil; Umar I. Masumdar (2013). "Utilization of Spintronics". ISSN 2250-3153. - 23 milestones in the history of spin compiled by Nature - "Spintronics". Scientific American. June 2002. - Spintronics portal with news and resources - RaceTrack:InformationWeek (April 11, 2008) - Spintronics research targets GaAs. - Spintronics Tutorial - Lecture on Spin transport by S. Datta (from Datta Das transistor) -Part 1 and Part 2 - "Overview of Spintronics". IJERT. June 2013. - "Utilization of Spintronics". IJSRP. June 2013.
28
June 30, 2009 Stirred, not shaken: Bio-inspired cilia mix medical reagents at small scales The equipment used for biomedical research is shrinking, but the physical properties of the fluids under investigation are not changing. This creates a problem: the reservoirs that hold the liquid are now so small that forces between molecules on the liquid’s surface dominate, and one can no longer shake the container to mix two fluids. Instead, researchers must bide their time and wait for diffusion to occur. Scientists at the University of Washington hope to speed up biomedical reactions by filling each well with tiny beating rods that mimic cilia, the hairlike appendages that line organs such as the human windpipe, where they sweep out dirt and mucus from the lungs. The researchers created a prototype that mixes tiny volumes of fluid or creates a current to move a particle, according to research published in the journal Lab on a Chip. They used a novel underwater manufacturing technique to overcome obstacles faced by other teams that have attempted to build a similar device. Diffusion, or random mixing of molecules, is slow but often the only option for mixing the small volumes that are increasingly common in modern biomedical research. A plate that once held 96 wells now can have 384 or 1,536 wells, each of which tests reactions on different combinations of liquids. The volume of liquid in each well of the 384-well plate is just 50 microliters, about the volume of a single drop of water. “In order to mix water with juice, you can shake it, because the mass is very big,” said Jae-Hyun Chung, a UW assistant professor of mechanical engineering and corresponding author of the paper. “(For the wells used in biomedical assays) you can’t shake the well to mix two fluids because the mass of liquid in each well is very small, and the viscosity is very high.” The problem of mixing at small scales has confronted biomedical researchers for about 40 years, Chung said. Other strategies for mixing — shakers, magnetic sticks, ultrasonic systems, vortex machines — have not worked in biomedical research for various reasons, including the shear stress, the need to have a clear view of each well, and damage to the enzymes and biological molecules. In the past decade, various research groups have tried to develop structures that mimic cilia, which do the small-scale moving and shaking inside the human body. The problem is that each cilium finger must be very flexible in order to vibrate — so delicate, in fact, that manufactured cilia of this size collapse as they are placed in water. The UW team solved the problem by manufacturing the cilia underwater, Chung said. The resulting prototype is a flexible rubber structure with fingers 400 micrometers long (about 1/100 of an inch) that can move liquids or biological components such as cells at the microscopic scale. The team varied the length and spacing of the fingers to get different vibration frequencies. When they now apply a small vibration to the surrounding water, the fingers on the UW prototype move back and forth at 10 to 100 beats per second, roughly the vibration frequency of biological cilia. The results show the device can mix two fluids many times faster than diffusion alone and can generate a current to move small particles in a desired direction. A current could be used, for example, to move cells through a small-scale diagnostic test. 
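A rough order-of-magnitude estimate shows why pure diffusion is so slow at these scales. The sketch below assumes a well dimension of a few millimetres and textbook-style diffusion coefficients (both are representative assumptions, not figures from the UW study) and uses the characteristic one-dimensional diffusion time t ≈ L²/(2D):

def diffusion_time_hours(length_m, diff_coeff_m2_per_s):
    """Characteristic 1-D diffusion time t ~ L^2 / (2D), returned in hours."""
    return length_m ** 2 / (2 * diff_coeff_m2_per_s) / 3600

L = 3e-3                   # assumed well dimension, ~3 mm
D_protein = 1e-10          # assumed diffusion coefficient of a small protein, m^2/s
D_small_molecule = 1e-9    # assumed diffusion coefficient of a small molecule, m^2/s

print(f"small protein:  ~{diffusion_time_hours(L, D_protein):.0f} hours")
print(f"small molecule: ~{diffusion_time_hours(L, D_small_molecule):.1f} hours")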
Co-authors are UW mechanical engineering doctoral student Kieseok Oh and mechanical engineering professors Santosh Devasia and James Riley. The research is funded by the National Science Foundation. The team has obtained a provisional patent on the technology, and has funding from the UW’s Royalty Research Fund to build a prototype 384-well plate lined with cilia. “We are currently trying to develop the technology for high-throughput biochemical applications,” Chung said. “But we can also do micro-mixing and micro-pumps, which have many potential applications.” For more information, contact Chung at 206-543-4355 or [email protected].
28
(PhysOrg.com) -- Dawid Zalewski of the University of Twente, Netherlands, has developed a mini-laboratory on a chip that can purify biological mixtures continuously. This is very different from the usual method that can only process small quantities at a time. In fifteen minutes, the PhD student’s chip processes no less than 25,000 times as much liquid as a ‘normal chip’ handles in a single cycle. Zalewski was awarded his doctorate on 24 October at the faculty of Science and Technology. Lab-on-a-chip technology, which involves complete chemical laboratories the size of a chip, is on the rise. Many of these mini-laboratories are able to separate mixtures - of biological substances, for instance. This usually occurs with the aid of capillary electrophoresis; that is, a mixture is led through a thin tube over which a high voltage is applied. The voltage causes the components in the mixture to move through the tube. The size, shape and charge of the molecules affect the speed with which they move. The components that move the fastest are the first to reach the end of the tube and can be collected there - separately from the other molecules. Dawid Zalewski has developed a new form of capillary electrophoresis that can separate substances continuously: synchronized continuous-flow zone electrophoresis. In a quarter of an hour this method can process around five microlitres of liquid. This does not sound like very much, but a regular capillary electrophoresis chip can only process a couple of hundred picolitres of liquid in a cycle. This tiny quantity is not a problem if, for example, you only want to show whether a certain substance is present in a mixture. But if you want to process the pure substance further, this is a fundamental limitation. Zalewski’s chip is not limited in this way and can process 25,000 times as much liquid as a normal chip in a single cycle, in a quarter of an hour. No mechanical components The point of departure in the method developed by Zalewski was that the separation would only take place electrokinetically and that there would be no mechanical components, such as tiny pumps, on the chip. After all, mechanical components break more quickly and, furthermore, pumps are difficult, and therefore expensive, to produce at this scale. Zalewski’s method uses an additional difference in voltage, perpendicular to the existing electrical field. As a result, the substances are not only separated in the horizontal direction, but also in the vertical direction. Since the additional difference in voltage is not constant but changes in time, the pure substances come out in a wavelike movement. The collector, the part of the chip that collects the pure substance, moves up and down with this wave movement. Incidentally, the PhD student has already made further modifications to his chip. The improved version has a second collector so that the chip can separate two different pure substances simultaneously. Provided by Universiteit Twente
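The 25,000-fold figure quoted above can be reproduced directly from the volumes given in the article; the only assumption in the sketch below is reading "a couple of hundred picolitres" as 200 pL.

continuous_chip_microlitres = 5.0        # processed by Zalewski's chip in fifteen minutes
conventional_chip_picolitres = 200.0     # processed by a regular chip in one cycle

ratio = (continuous_chip_microlitres * 1e-6) / (conventional_chip_picolitres * 1e-12)
print(f"throughput ratio ≈ {ratio:,.0f}x")   # ≈ 25,000x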
28
How super-cows and nanotechnology will make ice cream healthy August 21st, 2005 In a field somewhere in County Down, Northern Ireland, is a herd of 40 super-cows that could take all the poisonous guilt out of bingeing on ice cream. Unilever, the manufacturer of Persil and PG Tips, is sponsoring a secret research project by a leading British agricultural science institution into how to reduce the levels of saturated fat in cow's milk. It is also experimenting with nanotechnology, or the science of invisibly tiny things. Unilever believes that by halving the size of particles that make up the emulsion - or fatty oil - that it uses to make ice cream, it could use 90 per cent less of the emulsion. (Ed.'s note: just shrinking the particle size into the nano-realm does not make it nanotechnology, even by today's materials science standards. For that, the particles would have to exhibit new properties.)
28
Researchers at MIT and Stanford University have developed a new kind of solar cell that combines two different layers of sunlight absorbing material in order to harvest a broader range of the sun's energy The ... - Read More As nanotechnology makes possible a world of machines too tiny to see researchers are finding ways to combine living organisms with nonliving machinery to solve a variety of problems Like other first generation bio robots ... - Read More Researchers from the Malaspina Expedition have made strides in the understanding of the mechanisms governing the persistence of dissolved organic carbon DOC for hundreds or thousands of years in the deep ocean Most of this ... - Read More Brazil's 'arc of deforestation' accounted for 85% of all Amazon deforestation from 1996 to 2005. Although deforestation rates have dropped considerably since 2005 the forests of the southeastern Amazon remain vulnerable to expanding development which ... - Read More Poop could be a goldmine literally Surprisingly treated solid waste contains gold silver and other metals as well as rare elements such as palladium and vanadium that are used in electronics and alloys Now researchers ... - Read More Like vast international trading companies biological systems pick up freight items in the form of small molecules transport them from place to place and release them at their proper destination These ubiquitous processes are critical ... - Read More Scientists have developed a new simple way to cook rice that could cut the number of calories absorbed by the body by more than half potentially reducing obesity rates which is especially important in countries ... - Read More n a recent study Spatiotemporal isolation of attosecond pulses in the soft X ray water window published in Nature Communications by the Attoscience and Ultrafast Optics Group led by ICREA Professor at ICFO Jens Biegert ... - Read More Construction crews may someday use a plant molecule called lignin in their asphalt and sealant mixtures to help roads and roofs hold up better under various weather conditions It also could make them more environmentally ... - Read More offering a potential weight loss strategy for humans The team will describe their approach in one of nearly 11 000 presentations at the 249th National Meeting & amp Exposition of the American Chemical Society ACS ... - Read More Chlorine a disinfectant commonly used in most wastewater treatment plants may be failing to completely eliminate pharmaceuticals from wastes As a result trace levels of these substances get discharged from the plants to the nation’s ... - Read More and consume “You might be surprised to learn this but you’re eating lignin every day if you’re eating vegetables ” he points out The researchers acknowledge funding from ICOPAL B V Van Gelder B V ... - Read More but they no longer use the ozone depleting gases called CFCs They may however contain additional chemicals though the exact constituents can vary “Outside in a landfill potentially harmful substances in the peanuts such as ... - Read More Nanoparticles of various types can be quickly and permanently bonded to a solid substrate if one of the most effective methods of synthesis click chemistry is used for this purpose The novel method has been ... - Read More A theoretical and experimental study could lead to improved catalysts for producing hydrogen fuel from waste biomass Experimental analysis and computer simulations reveal new insights into the process by which ethanol produced from waste biomass ... 
- Read More An oxide carbon composite outperforms expensive platinum composites in oxygen chemical reactions for green energy devices Electrochemical devices are crucial to a green energy revolution in which clean alternatives replace carbon based fuels This revolution ... - Read More A Carnegie led team was able to discover five new forms of silica under extreme pressures at room temperature Their findings are published by Nature Communications Silicon dioxide commonly called silica is one of the ... - Read More The patterns of plant species diversity along Swedish boreal streams are closely linked to flow of surface and sub surface water The linkages between vegetation and hydrology are tight and according to Lenka Kuglerová they ... - Read More To combat global climate change caused by greenhouse gases alternative energy sources and other types of environmental recourse actions are needed There are a variety of proposals that involve using vertical ocean pipes to move ... - Read More Chemistry2011 is an informational resource for students, educators and the self-taught in the field of chemistry. We offer resources such as course materials, chemistry department listings, activities, events, projects and more along with current news releases. The history of the domain extends back to 2008 when it was selected to be used as the host domain for the International Year of Chemistry 2011 as designated by UNESCO and as an initiative of IUPAC that celebrated the achievements of chemistry. With IYC 2011 now over, the domain is currently under redevelopment by The Equipment Leasing Company Ltd.
28
Finding a way to build a quantum computer that works more efficiently than a classical computer has been the holy grail of quantum information processing for more than a decade. “There is quite a strong competition at the moment to realize these protocols,” Mark Tame tells PhysOrg.com. The latest experiment performed as a collaboration by a Queen’s University theoretical group and an experimental group in Vienna has “allowed us to pick up the pace” of quantum computing. The joint project’s experiment is reported in Physical Review Letters in an article titled, “Experimental Realization of Deutsch’s Algorithm in a One-Way Quantum Computer.” “This is the first implementation of Deutsch’s Algorithm for cluster states in quantum computing,” Tame explains. Tame along with members of the Queen’s group in Belfast, including Mauro Paternostro and Myungshik Kim joined a group from the University of Vienna, including Robert Prevedel, Pascal Böhi, and Anton Zeilinger (who is also associated with the Institute for Quantum Optics and Quantum Information at the Austrian Academy of Sciences) to perform this experiment. “When performing a quantum algorithm,” says Tame, “the standard approach is based on logical gates that are applied in a network similar to classical computing.” Tame points out that this method of quantum computing is not practical or efficient. “Our quantum computer model uses cluster states, which are highly entangled multi-partite quantum states.” The Irish and Austrian group’s quantum computer makes use of four entangled photons in a cluster state. Tame explains how it works: “Our setup is completely based on light, where quantum information is encoded on each photon. The information is in the polarization of each photon, horizontal or vertical, and superpositions in between. An ultra-violet laser pumps a crystal and produces an entangled pair of photons in one direction. The laser beam then hits a mirror and bounces back to form another pair of entangled photons on its second passage through the crystal. These four photons are then made to interact at beamsplitters to form the entangled cluster state resource on which we perform the quantum computation.” Next, Tame says, come the calculations. “We perform Deutsch’s Algorithm as a sequence of the measurements. When you measure in a specific basis, you can manipulate the quantum information in the photons using their shared entanglement.” He continues with an illustration related to classical computing: “You can think of the cluster state as the ‘hardware’, and the measurements as the ‘software’.” Now that the groups in Belfast and Vienna have proved that Deutsch’s Algorithm works for a cluster-based quantum computer, the next step is to apply it to larger systems. “Right now it’s really just a proof of principle,” explains Tame. “We’ve shown it can be done, but we need to build larger cluster states and perform more useful computations.” Tame admits that this next step is where it gets trickier. “Quantum systems like this can be influenced by small fluctuations in the environment. It can be difficult to get accurate computations using larger resources.” He says that noise resistant protocols need to be developed in order to maintain the coherence of the quantum information. “There’s not a lot of noise in the lab during the implementation of experiments on small numbers of qubits. But as we increase this number there are physical and technological concerns that need to be solved. 
This is a key issue.” And does Tame have any idea how to solve some of these issues? “We have some schemes at the moment. It’s a work in progress.” He pauses. “But for now it’s exciting to have this proof that quantum computing can be efficiently performed with Deutsch’s Algorithm.”
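For readers unfamiliar with the algorithm itself, the toy simulation below shows what Deutsch’s Algorithm computes: whether a one-bit function is constant or balanced, using a single oracle call. It is written in the conventional circuit model purely for illustration; the experiment described above implements the algorithm through measurements on a four-photon cluster state, which this sketch does not attempt to reproduce.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    """U_f |x, y> = |x, y XOR f(x)>, as a 4x4 matrix over the basis |00>, |01>, |10>, |11>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])     # prepare |0>|1>
    state = np.kron(H, H) @ state       # Hadamard on both qubits
    state = oracle(f) @ state           # the single oracle query
    state = np.kron(H, I2) @ state      # Hadamard on the query qubit
    p0 = state[0] ** 2 + state[1] ** 2  # probability the query qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

for name, f in [("f(x)=0", lambda x: 0), ("f(x)=1", lambda x: 1),
                ("f(x)=x", lambda x: x), ("f(x)=1-x", lambda x: 1 - x)]:
    print(name, "->", deutsch(f))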
28
March Heat Records Crush Cold Records by Over 35 To 1 The final data is in for the unprecedented March heat wave that was “unmatched in recorded history” for the U.S. (and Canada). New heat records swamped cold records by the stunning ratio of 35.3 to 1. This ratio is almost off the charts, even with the brutally warm August we had, as this chart from Capital Climate shows. For the year to date, new heat records are beating cold records by 22 to 1, which trumps the pace of the last decade by more than a factor of 10! I like the statistical aggregation across the country, since it gets us beyond the oft-repeated point that you can’t pin any one record temperature on global warming. A 2009 analysis shows that the average ratio for the 2000s was 2.04-to-1, a sharp increase from previous decades. Lead author Dr. Gerald Meehl explained, “If temperatures were not warming, the number of record daily highs and lows being set each year would be approximately even.” Meteorologist Jason Samenow points out just how extreme the heat wave was: “More than 7,700 daily record high temperatures were set (or tied), compared to just 287 record lows, in some cases by mind-blowing margins and over multiple days. In several instances in the Great Lakes and Upper Midwest region, morning lows even bested record highs and high temperatures soared above mid-summer norms.” Many of the country’s leading climatologists and meteorologists have looked at the data and concluded that, like a baseball player on steroids, our climate system is breaking records at an unnatural pace. Weather Channel meteorologist Stu Ostro calls the current heat wave “surreal” and explained that “While natural factors are contributing to this warm spell, given the nature of it and its context with other extreme weather events and patterns in recent years there is a high probability that global warming is having an influence upon its extremity.” Meteorologist Dr. Jeff Masters has said, “this is not the atmosphere I grew up with.” He published a detailed statistical analysis concluding, “It is highly unlikely the warmth of the current ‘Summer in March’ heat wave could have occurred unless the climate was warming.” Climate Central pointed out that given the intensity, duration, and geographical breadth of the heat wave, “this may be an unprecedented event since modern U.S. weather records began in the late 19th century.” They interviewed several top scientists who explained global warming’s likely role in helping to make this extreme event so unique. Welcome to the new climate in which heat waves are pushing farther outside the envelope of what has been observed previously during the historical record. To quote Hansen et al. (2011), “Today’s extreme anomalies occur because of simultaneous contributions of specific weather patterns and global warming.” I’m usually very cautious about linking weather events to global warming as there is considerable natural variability in the system, but these are jaw-dropping records and such events are more likely today than 60 years ago. NBC News has a very good story about the cause of the extreme weather. Their chief environmental correspondent Ann Thompson interviews NOAA scientist, Dr. David Easterling: Thompson: But scientists say ping-ponging between weather extremes may be an indicator of a much bigger problem: the heat-trapping gases of climate change Easterling: The warming that we’ve seen actually increases the chances, kind of loads the dice that we’re going to see these kinds of events more often. 
Thompson: Dr. David Easterling of the National Oceanic and Atmospheric Administration is a co-author of a United Nations report out this week that points to climate change as leading to extreme weather events since 1950. Easterling: With unusually warm days and nights, and to some extent heat waves, you can actually begin making that link between climate change and those events. Since the science of attributing extreme events to global warming is still emerging, scientists still disagree to what extent a specific event like this heat wave is driven by global warming. But two of the leading experts explain at RealClimate why even small shifts in average temperature mean “the probability for ‘outlandish’ heat records increases greatly due to global warming.” Furthermore, “the more outlandish a record is, the more would we suspect that non-linear feedbacks are at play – which could increase their likelihood even more.” The really worrisome part is that we’ve only warmed about a degree and a half Fahrenheit in the past century. We are on track to warm five times that or more this century. In short, we ain’t seen nothing yet! Joe Romm is a Fellow at American Progress and is the editor of Climate Progress, which New York Times columnist Tom Friedman called "the indispensable blog" and Time magazine named one of the 25 "Best Blogs of 2010." In 2009, Rolling Stone put Romm #88 on its list of 100 "people who are reinventing America." Time named him a "Hero of the Environment" and "The Web’s most influential ...
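Meehl’s point that a stationary climate should produce roughly equal numbers of new record highs and lows is easy to check numerically. The toy simulation below uses made-up station counts, record lengths and trend sizes (illustrative assumptions, not real observations) and counts how many independent temperature series set a new high or low record in their final year, first with no trend and then with a modest warming trend:

import numpy as np

rng = np.random.default_rng(42)
stations, years = 50_000, 60              # assumed numbers, purely illustrative

def record_counts(trend_sigma):
    """Count series whose final year sets a new record high or record low."""
    temps = rng.normal(size=(stations, years))
    temps += np.linspace(0.0, trend_sigma, years)   # linear warming trend, in standard deviations
    highs = int((temps[:, -1] > temps[:, :-1].max(axis=1)).sum())
    lows = int((temps[:, -1] < temps[:, :-1].min(axis=1)).sum())
    return highs, lows

for trend in (0.0, 1.0):                  # stationary climate vs. a 1-sigma warming over the record
    h, l = record_counts(trend)
    print(f"trend = {trend:.1f} sigma: {h} record highs, {l} record lows, ratio ≈ {h / max(l, 1):.1f}")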
29
SINGAPORE (Reuters) - Scientists have begun drilling ice cores at a shrinking tropical glacier in Indonesia to collect data on climate change, and hope their findings could lead to better predictions about crucial monsoon rains. The team led by alpine glaciologist Lonnie Thompson of Ohio State University is drilling near the summit of 4,884-metre (16,000-foot) Puncak Jaya on Indonesia's part of New Guinea island. The mountain is the highest in Oceania and the only place in the tropical Pacific with glacial ice that scientists can study to see how the climate has changed over centuries. Thompson said this was probably the last chance to take the ice cores because the tiny glaciers on Puncak Jaya, which have lost 80 percent of their ice since 1936, are melting rapidly. "Our effort is as much a salvage mission to get these records, get whatever they offer and to store some of the ice for the future," he told Reuters from the town of Tembagapura near the mountain during a short break from drilling. "The glaciers have very little time left, much shorter than I thought," he said, adding that temperatures at the summit were above freezing, with rain every day melting the ice fields. Glaciers can store long-term records of the climate. Those on Puncak Jaya have layers like tree rings that mark the difference between the wet and dry seasons in the tropics, Thompson said. "You can count these layers back through time and you can get a very precise history," he said. EL NINO CLUES El Nino is a periodic warming of the eastern and central Pacific that can lead to drought in Southeast Asia and Australia and can also affect the monsoon in India while causing floods in parts of South America. Scientists are trying to pin down if global warming will lead to more and stronger El Ninos that could cause more droughts, fires and crop failures in parts of Asia, affecting millions of people and forcing some nations to import more food. Indonesia is particularly susceptible to drought because of its large population and dependence on income from rice, cocoa, palm oil, coffee and other crops. Thompson and his team have so far drilled two cores about 30 metres in length but can't yet say how far back the record goes. He said a 50-metre ice core his team drilled on Tanzania's 5,895-metre Mount Kilimanjaro dated back 11,700 years. Thompson has also taken ice cores on Peru's 6,100-metre Nevado Hualcan, on the opposite side of the Pacific. Peru often gets droughts when Indonesia gets wet weather, and the idea is to build up a broader set of data covering ice cores, tree rings, corals and other proxies of past climate. Getting the Indonesian ice cores has proved a vast logistical challenge, involving shipping in about four tonnes of equipment, including drills and large ice boxes to store the ice cores and make sure they don't melt on the way to Ohio. (Editing by Jeremy Laurence)
29
A global boom in shale oil production similar to the one already underway in the United States could bring down the price of crude as much as 40 percent and add up to 3.7 percent to world economic output, a study released Thursday said. A study by the PwC consultancy estimated that global production of shale, or tight oil, could gush up to 14 million barrels per day by 2035, or about 12 percent of the world's total oil supply. It estimated this would cut 25 to 40 percent off the projected price of $133 per barrel in 2035 by the US Energy Information Administration, which still assumes low levels of shale oil production. PwC said "we estimate this could increase the level of global GDP in 2035 by around 2.3-3.7 percent," which is worth $1.7-$2.7 trillion (1.3-2.0 trillion euros) at today's global gross domestic product levels. The consultancy said the widespread tapping of shale oil "would revolutionise global energy markets, providing greater long term energy security at lower cost for many countries." Technological breakthroughs in recent years have allowed the recovery of oil and natural gas from shale rock formations that previously could not be exploited. Tapping such "unconventional" resources has led to a boom in US oil production, hitting 910,000 barrels per day in January according to estimates by the International Energy Agency, the highest level in over 30 years. PwC noted this has led to US crude oil prices falling compared to global prices. A boom in shale gas production has also led to far lower natural gas prices in the United States than in other countries.
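The figures quoted in the PwC summary are internally consistent, as a quick back-of-envelope check shows; the sketch below uses only numbers stated in the article and backs out the implied size of today's global GDP rather than assuming one.

baseline_2035_price = 133.0                 # USD per barrel, EIA projection cited above
for cut in (0.25, 0.40):
    print(f"{cut:.0%} reduction -> about ${baseline_2035_price * (1 - cut):.0f} per barrel")

# Implied global GDP base behind the quoted $1.7-2.7 trillion / 2.3-3.7 percent ranges
for share, trillions in ((0.023, 1.7), (0.037, 2.7)):
    print(f"${trillions} tn at {share:.1%} of GDP implies a base of about ${trillions / share:.0f} tn")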
29
The global environment is the entirety of our planet's natural systems. It encompasses ecosystems, climate, geology, regional environments, and human societies and artificial environments. It can also be termed Mother Nature or Mother Earth in a broader perspective, or described simply as the state of our surroundings. Global Warming Global warming is the phenomenon of abnormal changes in the global environment because of continuously increasing temperatures of the Earth's near-surface air and oceans. One of the main causes of global warming is the thinning of the ozone layer due to the release of ozone-destructive chemicals in the atmosphere that break down the O3 particles of the said layer. Such an occurrence is also known as ozone depletion. The greenhouse effect is also considered a major, if not the biggest, contributor to global warming. The greenhouse effect warms the earth by trapping the heat from the sun, preventing it from rising out of our atmosphere. Carbon monoxide and other pollutants make heat remain near the earth's surface, similar to how a greenhouse absorbs heat and prevents it from getting out. Through the years, global warming has become synonymous with climate change because experts believe that of all the effects of the earth's surface heating up, climate change is the most disturbing and dangerous. - Abnormal amounts of rainfall occur all over the world because of the change in weather patterns brought about by changes in temperature. Days' or even weeks' equivalent of rain sometimes pours down in a matter of hours. Such heavy rainfall can cause destructive flooding. - Abnormally high temperatures can also cause very high evaporation rates that lead to more frequent and longer-lasting droughts. Such droughts can lead to famine and food shortage in some areas. - The rapid changes in temperature have caused more and stronger hurricanes and storms. Another abnormal change in the global environment brought about by global warming is the rapid rising of sea levels. This is because abnormally high temperatures are melting the polar ice caps. Sea levels have already risen severely in the past decades and some countries are beginning to become submerged in water. Ocean waters in some regions have also become acidic and dangerous to life. Acidic waters coupled with high sea-surface temperatures have already claimed the life of over a quarter of the world's known coral reefs. Changes in the global environment due to climate change have caused sudden changes in the planet's ecosystems. Hundreds of plant and animal species which have failed to adapt to the abrupt changes have already faced extinction. Measures need to be taken now before even mankind is pushed to the brink of extinction. One move being pushed by the United Nations is the use of green sources of energy instead of the conventional ones that pollute the environment. Support for the sustainability industry is being called for, and waste-to-energy processes are under extensive research and development. The newest discovery in green technologies is the biosphere gasification process, which involves the efficient and eco-friendly conversion of solid wastes into green electricity.
29
The UN has issued a startling report on the likely consequences of climate change unless world populations change course dramatically. Delegates from 110 governments agreed on the findings line by line; the report predicts that, among other consequences, crop yields will be affected, civil wars will likely break out as poverty and food scarcity become more commonplace, and economic growth will be stunted. Western cities like Las Vegas are already seeing the drastic results of drought. Is it too late to change? What adaptive measures can be taken in a worst-case scenario? Guests: William James Smith, UNLV Climate Lab; Eric Holthaus, meteorologist and Slate writer; McKenzie Funk, author of "Windfall: The Booming Business of Global Warming."
29
LONDON — An “environmental train wreck.” That’s what leading environmental scientists say that Australian Prime Minister Tony Abbott has engineered, in less than one year in office. They say the changes he’s implementing could result in irreversible damage to some of the world’s most fragile ecosystems. And they say they are “screaming in the dark” to get the country’s ultra conservative government to take a more sustainable course, so far with little luck. Of course, not everyone agrees with the scientists, or at least with their priorities. Abbott came to power last September promising to abolish the country’s landmark carbon and mining taxes, and cut “green tape” that he said hindered development. Brendan Pearson, CEO of the Minerals Council of Australia (MCA), said the abolition of these two industry taxes would secure future investment and jobs, help regional communities, increase tax revenues, reduce energy prices, and boost Australia’s international competitiveness. But Professor Bill Laurance — recipient of the Australian Laureate Fellowship, one of Australia’s highest research honors — is astounded by the pace and scope of the environmental rollbacks. He said the proposal to abolish the carbon tax and replace it with a “direct action plan” was just one of a “whole avalanche” of issues that worry Australia’s leading environmental scientists. The list of issues includes: Even more controversially, Abbott’s government has permitted a coal port to dredge up and dump millions of cubic feet of sand into the iconic Great Barrier Reef Marine Park, a decision that the Chairman of the Marine Park Authority has rigorously defended. And in another unprecedented move, the government has asked UNESCO to remove 74,000 hectares of Tasmanian forest from its World Heritage List. A prime ministerial statement has also effectively banned the creation of new National Parks, with Prime Minister Abbott announcing that too much forest was already “locked away.” Laurance, based at James Cook University in Queensland, says Abbott’s National Parks decision had come at a “very bad time,” with some ecosystems in desperate need of protection, such as the Mountain Ash forests in Victoria, home to the critically endangered native Leadbeater’s possum, decimated by logging and wildfires. Things aren't looking good for the Leadbeater's possum. (Wikimedia Commons) “I come from the western US and we are hearing a very similar dialogue to the one used there by conservatives, who say ‘you’re just locking up the forests,’” he says. “That’s an age-old characterization — a way conservatives have historically described areas that they want to get into.” Dr. Chris Fulton, a coral reefs expert at the Australian National University, said a shift in thinking was needed at the highest levels of government. “We are looking at a government that is constantly speaking in terms of nature being there in the service of us, nature being there for us to exploit and use, that nature can only be appreciated by giving us wood or fish or coal,” he says. “But this is nineteenth century or even eighteenth century thinking; We can’t expect a natural resource to go on giving us what we want without it collapsing.” Dr. Thomas Lovejoy, environmental advisor to three American presidents, said he hoped Australia would take a “fresh look” at its forestry policy ahead of the World Parks Congress, set to meet in Sydney in November. He said it appeared that “short-term economics” in Australia were driving key policy initiatives. 
“Both climate change and biodiversity need more and stronger attention than they are getting,” he added. Laurance explained that the situation is compounded by a shift to the right across the electorate, with conservative governments now in power in all major states, as well as at the federal level. “There’s an abundance of scientific evidence showing that a lot of the Australian ecosystems are in trouble,” he says. But “Abbott is almost a fundamentalist type character. I think his view is ‘these people didn’t vote for me, they’re not going to vote for me,’ so he’s effectively written off that constituency, which of course includes a large part of mainstream Australia. “He reminds me of Reagan or some of the people in the Reagan administration, such as James Watts, Reagan’s secretary of the interior, who was a lightning rod for criticism,” he added. “It’s very polarizing here.” More from GlobalPost: 8 scary (but very real) risks we all face if climate change goes unchecked Fulton laments Abbott’s rolling back of the Environment Protection and Biodiversity Conservation (EPBC) Act, Australia’s seminal legislative tool used to measure whether a development is environmentally sound. “There has been an alarming escalation in what the government is doing to that act in terms of making it conducive to development,” says Fulton. “Decisions that used to be made by the federal government under the EPBC Act are being devolved to other agencies, and in so doing removing the central coordination and management of environmental regulation in Australia.” New offshore oil and gas exploration permits, for example, are now being approved by the National Offshore Petroleum Safety and Environmental Management Authority, says Fulton, the same agency that’s responsible for enforcing safety regulations. “They no longer have to go through a community consultation process, so they can just rubber stamp every single application for oil and gas. “If you think about a threat, on the scale of 1 to 10, dredging in the Great Barrier Reef probably sits toward the bottom, and oil and gas sits toward the top. It only takes one good oil spill and the entire Great Barrier Reef could be wiped out. We’ve seen that already in the Gulf of Mexico.” Corals in the Great Barrier Reef. (Wikimedia Commons) Fulton said other moves to provide the environment minister with legal immunity meant the government and its ministers could not be held accountable for poor decisions that led to environmental disasters. “All those things to me are far more pressing issues that I’d get far more alarmed about (than the dredging).” GlobalPost contacted the Prime Minister’s office and the Department for the Environment for comment but received no replies. Pearson, of the MCA, said the amount of time and effort being wasted on the duplication of regulations was undermining industry and community confidence, while adding little value to environmental or heritage protection. “The MCA shares the view of governments at both the state and national level that there is considerable potential for reducing unnecessary red and green tape without compromising high environmental standards,” he said. “The mining industry isn’t seeking to duck or dodge scientific scrutiny — quite the opposite. We want a project approvals process based on practical concepts of sustainable development, sound science, transparency and scrutiny, procedural certainty, and meaningful community engagement. 
“It is simply not correct to say that resource developments in Queensland ‘no longer require community consultation’ — far from it. Not only will people affected by development proposals be able to comment, it will be quicker, easier and less costly to do so.” He said MCA member companies were signatories to a UN-recognized sustainable development framework that guaranteed “effective and transparent engagement, communication and independently-verified reporting.”
An international call to action
Australia has so far resisted calls from the EU and the US to include climate change on the agenda for November’s G20 meeting in Brisbane. Laurance says it could take some level of international embarrassment to force the Abbott government to re-think its entire green agenda. “Tourism’s a huge industry in Australia. You would like to see people start to say that they’re not going to visit Australia because of the astounding hypocrisy that is increasingly becoming the norm here,” he says. “The one thing the Abbott government does seem to understand is money and it needs that kind of talk because they’re clearly just not interested in anyone they see as environmentally oriented.” In June, UNESCO’s World Heritage Committee will decide whether to add the Great Barrier Reef to its “in danger” list, a move that Fulton says “has almost always in the past led the parent country to sit up and take notice.” More from GlobalPost: They razed paradise and put up a soybean lot Fulton advises against turning “the Great Barrier Reef into the Great Slime Reef because tourism dollars are a huge export industry and when our resources bubble eventually bursts and we aren’t able to just dig holes and make money out of it, those export dollars are going to become very, very important.” A recent report by Deloitte Access Economics found that the Great Barrier Reef Marine Park generates some $5.7 billion each year and supports 69,000 jobs, the overwhelming majority of these figures coming from tourism. “I would argue that if we sustainably managed our coral reefs for tourism, fishing and all the other things we gain from them, then we could manage that into perpetuity, whereas thermal coal, and oil and gas, are very finite resources offering a finite business plan — they are going to run out, potentially in our lifetimes.”
29
Tuesday, 04 September 2007 11:48 By Micaela Cook, Citizenre Representative Solar power is gaining popularity and attention in mainstream America, but solar technology has been reliably producing clean power for decades; in fact, many photovoltaic systems installed in the 1970s are still operating today. Today's typical solar cells have conversion efficiencies of 15 to 20%, but research and development programs aim to increase that to greater than 50%. While solar cells are already used for calculators, watches, satellites, remote telecommunications devices, municipal lighting, off-grid, and grid-tied power, solar energy also offers many possibilities for a sustainable future. Each day, 89 petawatts (that's 89 followed by 15 zeros) of sunlight reach the earth's surface. That is almost 6,000 times more than the 15 terawatts of power consumed on average by humans. As technology improves, and more of this energy is harvestable through photovoltaic (PV) cells, greater quantities of ground-level air pollution emissions will be avoided. Non-renewable sources of power are the greatest source of this type of pollution. Additionally, solar electric generation has the highest power density (global mean of 170 watts per square meter) among renewable energies, and this is without releasing any greenhouse gases whatsoever. By continuing to burn fossil fuels such as coal, gas and oil and clearing forests, we are dramatically increasing the amount of carbon dioxide in the Earth's atmosphere, and temperatures are rising. The world is already changing. Glaciers are melting, plants and animals are being forced from their habitats, and the number of severe storms and droughts is increasing. According to Al Gore's www.climatecrisis.net website:
- The number of Category 4 and 5 hurricanes has almost doubled in the last 30 years.
- Malaria has spread to higher altitudes in places like the Colombian Andes, 7,000 feet above sea level.
- The flow of ice from glaciers in Greenland has more than doubled over the past decade.
- At least 279 species of plants and animals are already responding to global warming, moving closer to the poles.
Although burning wood, or for instance oil from soybeans, releases carbon that was recently fixed by the plants from the air (and is therefore carbon neutral), we have already overloaded our atmosphere with so many additional tons of greenhouse gases that finding sources of energy that release absolutely no carbon or other greenhouse gases into the atmosphere is most advantageous. Unfortunately, for most individuals, the major barrier to using solar technology is cost. The cost of PV for a residential application is presently between $8 and $10 per watt, which works out to about $40,000 to $50,000 for an average 5 kilowatt system. New York State has offered 50% rebates towards the cost of installation, which has allowed a significant number of people who would not otherwise be able to have a solar system to install one. However, there are, of course, many people who cannot afford even 50% of the installation cost, and even after these substantial renewable incentives from the State and the Federal government, the cost of electricity still amortizes out to about 21.5 cents per kWh. That is between two and three times the average cost of electricity, a cost premium that is a significant barrier and the main reason the PV market is still in its early adoption stage. This set of circumstances within the photovoltaic industry has led to the birth of the Citizenre Corporation.
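The cost arithmetic above is easy to reproduce. The sketch below is a rough levelized-cost check using the article's $8 to $10 per watt and 50% rebate figures; the system lifetime and capacity factor are illustrative assumptions, not figures from the article, and financing costs are ignored.

```python
# Rough levelized-cost check for the residential PV figures quoted above.
# The $/watt and 50% rebate come from the article; the lifetime and capacity
# factor are illustrative assumptions and financing costs are ignored.

system_kw = 5.0        # typical residential system size (article)
lifetime_years = 25    # assumption
capacity_factor = 0.13 # assumption, plausible for New York State

lifetime_kwh = system_kw * capacity_factor * 8760 * lifetime_years

for cost_per_watt in (8.0, 10.0):
    gross = system_kw * 1000 * cost_per_watt
    net = gross * 0.5  # 50% New York State rebate (article)
    cents_per_kwh = 100 * net / lifetime_kwh
    print(f"${cost_per_watt:.0f}/W: ${gross:,.0f} gross, ${net:,.0f} after rebate, "
          f"~{cents_per_kwh:.0f} cents/kWh over {lifetime_years} years")
```

Under these simplified assumptions the result lands a few cents below the roughly 21.5 cents per kWh cited above; financing costs and more conservative output assumptions would push the figure toward the quoted value.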
According to their website (www.jointhesolution.com/spiralupworks), Citizenre believes that photovoltaics should play a much more important role in our nation's energy infrastructure than its current share of one fifth of one percent, and has actually laid the groundwork to make that possible. Citizenre offers consumers the opportunity to rent a photovoltaic system designed specifically for their residence, without any upfront cost. Customers pay for the electricity generated by the panels at the rate they pay to their current utility at the time they sign up. Thus, electricity rates can be locked in for up to 25 years, avoiding the 6% average increase in electricity rates that New York has undergone for the last 10 years. The availability of services like Citizenre's represents an enormous step toward making solar power affordable and practical for the average consumer. Citizenre is currently accepting reservations for their first round of installations in 2008. Micaela Cook is a former installer of solar panels, as well as a distributor for Citizenre. Her class entitled Solar Systems Without The Upfront Investment will be held Wednesday, September 26, from 7-8:15 pm in the classroom at GreenStar's West-End Store.
By Alexis Alexander
GreenStar's annual member-owner survey will be conducted in March this year. The survey provides the Operations staff and Council valuable feedback from our member-owners so we can better meet their diverse needs. All member-owners who are signed up to receive required mailings or announcements by email will receive a copy of the survey in that way. If you're signed up for emails and don't receive a survey link in your inbox by March 3, please che...
29
Estimates of Warming Gain More Precision And Warn of Disaster By WILLIAM K. STEVENS Published: December 15, 1992 SCIENTISTS may be zeroing in on a tighter estimate of just how much the earth's climate stands to be warmed by industrial waste gases that trap the sun's heat. Such estimates are usually made by computers programmed to simulate the world's climate. These computer models, however impressive, are no better than the assumptions fed into them. Now comes a substantial independent check on the models: an analysis of how the earth's climate responded to changes in atmospheric heat-trapping carbon dioxide and other influences in the distant past, based on geological and geophysical evidence. The new analysis suggests that if the atmospheric carbon dioxide doubles from its present level, the average global climate will become about 4 degrees Fahrenheit warmer. Some previous estimates predicted temperatures much higher or much lower than this. Recent refinements of computer models show similar results; taken together, the new assessments point toward a warming range of 4 to 6 degrees. If no action to restrain carbon dioxide emissions is taken, say the authors of the study, the earth's temperature will soar over the next century to perhaps the highest levels in a million years. This would probably alter the earth's climate with disruptive and possibly catastrophic consequences for both human society and natural ecosystems. Climate assessments like these are no mere academic exercise but an essential guide to the nations of the world in deciding whether to take stronger action underthe global warming treaty signed last June in Rio de Janeiro. The Clinton Administration is expected to favor stronger controls on burning oil and coal, which produce carbon dioxide, than its predecessor. Assuming moderate world population and economic growth, the amount of carbon dioxide in the atmosphere is expected to double by the end of the next century if no further action to reduce emissions is taken. The latest scientific study, reported in the current issue of the British journal Nature, appears to bolster the case for emission reductions. It uses climatic data from two periods in the past, one 20,000 years ago, in the depths of the last ice age, and the other in the mid-Cretaceous period 100 million years ago, when the temperature was 18 degrees warmer than now. From study of these two exceptional periods, the authors have produced one of the first independent tests of computer predictions that until now have been virtually the only basis for assessing future warming. Those predictions, which say that doubled carbon dioxide concentrations would cause a warming of 3 to 8 degrees, have been the scientific basis of international policy until now. The wide range results from uncertainties about the climate system built into the models. Other analyses have suggested that the warming could be as little as 1 degree or as much as 9 degrees. The average global surface temperature is now a little less than 60 degrees. According to various estimates, this is 5 to 9 degrees warmer than in the last ice age. Factors in Global Temperatures In the latest study, Dr. Martin I. Hoffert of New York University and Dr. Curt Covey of Lawrence Livermore National Laboratory in California analyzed data, largely developed from geological studies, on how the climate of the two ancient epochs changed in response to various influences. 
These forces, each of which leaves some measurable change in the geological record, include solar radiation and heat-trapping gases like carbon dioxide, which are produced naturally as well as by human industry. The analysis let the researchers calculate a pivotal property, the sensitivity or extent of response by the climate to each of these "forcing" factors. They found, for instance, that the earth's climate during the mid-Cretaceous was sensitive to carbon dioxide such that a doubling of the atmospheric content of the gas would raise the average global temperature by 4.5 degrees. The sensitivity of the ice-age climate 20,000 years ago was similar: a doubling of carbon dioxide would have produced a rise of 3.6 degrees. Combining the two results, Dr. Hoffert and Dr. Covey calculate that the climate's basic sensitivity to carbon dioxide is such that a doubling of the gas leads to a global warming of about 4 degrees, give or take 1.6 degrees. The finding "adds to the weight of evidence" favoring the findings of a panel set up by the United Nations to advise signatories to the climate treaty, Dr. Eric J. Barron, an earth scientist at Pennsylvania State University, wrote in a commentary in Nature. The United Nations group, called the Intergovernmental Panel on Climate Change, said earlier this year that its "best estimate" of the warming produced by a doubling of atmospheric carbon dioxide was 4.5 degrees. Effect of a Mask Some critics of the conventional wisdom on global warming have pointed out that the earth has not warmed up over the last century by nearly as much as the computer models say it should as a result of increasing carbon dioxide. Other climatologists argue that industrial processes also exert a cooling effect by depleting the stratospheric ozone layer and emitting airborne aerosols that reflect sunlight, and that this partly masks the larger warming effect. If emissions were curbed, the climatologists say, the cooling effect of aerosols would dissipate quickly; but carbon dioxide would remain in the atmosphere for decades. Two climatologists at the University of East Anglia, Dr. Tom Wigley and Dr. Sarah Raper, have now calculated the extent of the postulated cooling effect. They find that without it, climate over the last century would have warmed by 6 degrees, much as the computer models have predicted. In their new analysis, Dr. Hoffert and Dr. Covey examined a number of reconstructions of the ice-age and mid-Cretaceous climates and calculated the strength of all the factors that both warm the climate and cool it. By combining these factors, or "forcings," they arrived at a net warming effect expressed in watts per square meter. Then they examined the corresponding global temperatures of the two periods as revealed, for example, in changing isotopes of oxygen and carbon in ocean sediments. From this information they calculated the change in both temperature and forcings between then and today; and from that, the climate's sensitivity as expressed by the temperature change resulting from doubled carbon dioxide. Common Findings In a similar exercise some time ago, Dr. James E. Hansen and colleagues at the NASA Goddard Institute for Space Studies in New York also examined climate data from the last ice age and found that a doubling of carbon dioxide would produce a warming of about 5.4 degrees.
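The paleoclimate method described above reduces to a simple ratio: divide the reconstructed temperature change by the corresponding change in net radiative forcing, then multiply by the forcing of a CO2 doubling. The sketch below illustrates that logic; the 3.7 watts per square meter doubling forcing is a standard textbook value, and the paleoclimate forcing changes are rounded placeholders, not numbers taken from the Hoffert-Covey paper.

```python
# Illustration of the paleoclimate sensitivity calculation described above:
# sensitivity = (temperature change) / (net forcing change), scaled by the
# radiative forcing of doubled CO2. Inputs are rounded placeholders.

F_2XCO2 = 3.7  # W/m^2, standard forcing for a CO2 doubling (not from the article)

def warming_per_doubling(delta_t_f, delta_forcing_wm2):
    """Warming (deg F) for doubled CO2, given a reconstructed temperature
    change and the corresponding net change in radiative forcing."""
    sensitivity = delta_t_f / delta_forcing_wm2  # deg F per W/m^2
    return sensitivity * F_2XCO2

# Placeholder reconstructions (illustrative, not the paper's numbers):
cases = {
    "last ice age":   (-7.0, -7.0),   # ~7 deg F colder, ~7 W/m^2 less forcing
    "mid-Cretaceous": (18.0, 15.0),   # ~18 deg F warmer, ~15 W/m^2 more forcing
}

for era, (dt, df) in cases.items():
    print(f"{era}: ~{warming_per_doubling(dt, df):.1f} deg F per CO2 doubling")
```

With these rounded inputs the two eras come out at roughly 3.7 and 4.4 degrees per doubling, close to the 3.6 and 4.5 degrees reported in the article.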
The Hoffert-Covey study takes the analysis a big step further: by analyzing both a colder and a warmer climate than today's, it suggests a general level of climate sensitivity that applies universally, in all eras including today's. Dr. Hansen, who has been outspoken in asserting that human-induced global warming is under way, characterizes the kind of paleoclimatic analysis performed by Dr. Hoffert and Dr. Covey as "extremely valid; the best method we have for estimating climate sensitivity." Questions About Study The weakness of the Hoffert-Covey calculation, a number of climatologists say, is that it introduces uncertainties of its own. Some of the data are "very, very shaky," said Dr. Syukuro Manabe, a climate expert at the National Oceanographic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory at Princeton University. But Dr. Barron pointed out that the uncertainties have been factored into the Hoffert-Covey analysis and are reflected in the margin of error. And Dr. Manabe, despite his reservations, said of the Hoffert-Covey study: "I feel very comfortable with their conclusions; I think it is encouraging" in helping to produce a more precise assessment. The beauty of the Hoffert-Covey analysis of ancient climates, as its proponents see it, is that it gets around the key unknown that makes the computer models' predictions so uncertain: climate modelers do not yet know enough about the net effect on the earth's heat balance of clouds, which both trap and reflect warming radiation depending on circumstances, altitude and the type of cloud. Complex Network Although carbon dioxide is known to trap heat, the heat sets off an extremely complex network of interactions within the climate system, some of which amplify the heating and some of which lessen it. Climatologists have been trying to include all these interactions in the computer models, but the role played by clouds, especially, has eluded them. The Hoffert-Covey analysis implicitly includes the effect not only of clouds but of all the interactions, since the actual temperatures measured in those ancient periods would be the net result of the clouds and other feedbacks. Dr. Hoffert and Dr. Covey assert, and Dr. Barron agrees, that the analysis puts to rest claims that a doubling of carbon dioxide would produce a relatively negligible warming of 1 or 2 degrees. That, said Dr. Barron, is "fairly clear; that point in the study is pretty robust." The basic reason, says Dr. Hoffert, is that a climate whose responses are that sluggish would never have been able to produce the temperature extremes of both the ice age and the mid-Cretaceous. There is no guarantee that the carbon dioxide buildup in the atmosphere would halt once it had doubled, and some analyses indicate that this benchmark will be exceeded late in the next century if the present rate of carbon dioxide emissions continues. Using estimates of future "business as usual" carbon-dioxide emissions made by the United Nations panel, Dr. Hoffert and Dr. Covey calculate that if their findings on climate sensitivity are right, the global climate would warm by 5.4 to 7.2 degrees by the year 2100. "Such a warming," they wrote, "is unprecedented in the past million years." All agree that there is a long way to go before truly precise and confident predictions about global warming can be made. 
Quite apart from the question of narrowing down the climate's general sensitivity to carbon dioxide emissions, there is the even more difficult and complex matter of how the change will be distributed from one region to another and what it will do, in practical terms, to the climate system. The latest findings, says Dr. Hansen, "imply that you would have a significant shift of climate zones with a doubling of carbon dioxide, but the details of that impact is something we're trying to understand."
29
Current air quality legislation in Europe will lead to significant improvements in particulate matter pollution but without further emission control efforts many areas of Europe will continue to see air pollution levels above the limits ... - Read More Toyota Central R&D Labs Inc in Japan have reviewed research that might be leading the way towards a new generation of automotive catalytic converters Catalytic converters that change the toxic fumes of automobile ... - Read More As many places in the U S and Europe increasingly turn to biomass rather than fossil fuels for power and heat scientists are focusing on what this trend might mean for air quality and people's ... - Read More Researchers at the Department of Energy's SLAC National Accelerator Laboratory are trying to find out why uranium persists in groundwater at former uranium ore processing sites despite remediation of contaminated surface materials two decades ago ... - Read More With endocrine disrupting compounds affecting fish populations in rivers as close as Pennsylvania's Susquehanna and as far away as Israel's Jordan a new research study shows that soils can filter out and break down at ... - Read More In the midst of the California rainy season scientists are embarking on a field campaign designed to improve the understanding of the natural and human caused phenomena that determine when and how the state gets ... - Read More A NASA study using two years of observations from a novel mountaintop instrument finds that Los Angeles' annual emissions of methane an important greenhouse gas are 18 to 61 percent higher than widely used estimates ... - Read More Scientists have come up with a way of creating sensors which could allow machines to smell more accurately than humans Every odour has its own specific pattern which our noses are able to identify Using a ... - Read More Every several years workers apply a clay mask to India's iconic but yellowing Taj Mahal to remove layers of grime and reveal the white marble underneath Now scientists are getting to the bottom of what ... - Read More Snow is not as white as it looks Mixed in with the reflective flakes are tiny dark particles of pollution University of Washington scientists recently published the first large scale survey of impurities in North ... - Read More A new study into the pre industrial baseline levels of heavy metals in sediment carried by the Athabasca River shows that emissions from the Alberta oil sands and other human activities have not yet increased ... - Read More Just two hours of exposure to diesel exhaust fumes can lead to fundamental health related changes in biology by switching some genes on while switching others off according to researchers at the University of British ... - Read More Nanoparticles extremely tiny particles measured in billionths of a meter are increasingly everywhere and especially in biomedical products Their toxicity has been researched in general terms but now a team of Israeli scientists has for ... - Read More Researchers at the University of Guanajuato UGTO in middle Mexico developed an extraction column which recovers metals companies use in their production processes and thus avoid environmental pollution and lessen economic losses The column operates ... - Read More Researchers from the Madariaga Lab at the Universidad Politécnica de Madrid have carried out a series of trials to study the explosiveness of sludge on thermal drying plants of sewage sludge The obtained result will 
- Read More Driving vehicles that use electricity from renewable energy instead of gasoline could reduce the resulting deaths due to air pollution by 70 percent This finding comes from a new life cycle analysis of conventional and ... - Read More Ever wonder what's in the black cloud that emits from some semi trucks that you pass on the freeway Lawrence Berkeley National Laboratory Berkeley Lab scientist Thomas Kirchstetter knows very precisely what's in there having ... - Read More Recent research from the University of Alberta reveals that contrary to current scientific knowledge there's no atmospheric lead pollution in the province's oil sands region William Shotyk a soil and water scientist who specializes in ... - Read More A team of researchers from the Cockrell School of Engineering at The University of Texas at Austin and environmental testing firm URS reports that a small subset of natural gas wells are responsible for the ... - Read More Not all boreholes are the same Scientists of the Karlsruhe Institute of Technology KIT used mobile measurement equipment to analyze gaseous compounds emitted by the extraction of oil and natural gas in the USA For ... - Read More
29
Simple measures of ozone depletion in the polar stratosphere R. Müller1, J.-U. Grooß1, C. Lemmen1,*, D. Heinze1, M. Dameris2, and G. Bodeker3 1ICG-1, Forschungszentrum Jülich, 52425 Jülich, Germany 2DLR, IPA, Oberpfaffenhofen, Germany 3NIWA, Private Bag 50061, Omakau Central Otago, New Zealand *now at: Copernicus Instituut voor Duurzame Ontwikkeling en Innovatie, Universiteit Utrecht, 3584CS Utrecht, The Netherlands and Institut für Küstenforschung, GKSS-Forschungszentrum Geesthacht GmbH, 21502 Geesthacht, Germany Abstract. We investigate the extent to which quantities that are based on total column ozone are applicable as measures of ozone loss in the polar vortices. Such quantities have been used frequently in ozone assessments by the World Meteorological Organization (WMO) and also to assess the performance of chemistry-climate models. The most commonly considered quantities are March and October mean column ozone poleward of geometric latitude 63° and the spring minimum of daily total ozone minima poleward of a given latitude. Particularly in the Arctic, the former measure is affected by vortex variability and vortex break-up in spring. The minimum of daily total ozone minima poleward of a particular latitude is debatable, insofar as it relies on one single measurement or model grid point. We find that, for Arctic conditions, this minimum value often occurs in air outside the polar vortex, both in the observations and in a chemistry-climate model. Neither of the two measures shows a good correlation with chemical ozone loss in the vortex deduced from observations. We recommend that the minimum of daily minima should no longer be used when comparing polar ozone loss in observations and models. As an alternative to the March and October mean column polar ozone we suggest considering the minimum of daily average total ozone poleward of 63° equivalent latitude in spring (except for winters with an early vortex break-up). Such a definition both obviates relying on one single data point and reduces the impact of year-to-year variability in the Arctic vortex break-up on ozone loss measures. Further, this measure shows a reasonable correlation (r=–0.75) with observed chemical ozone loss. Nonetheless, simple measures of polar ozone loss must be used with caution; if possible, it is preferable to use more sophisticated measures that include additional information to disentangle the impact of transport and chemistry on ozone. Citation: Müller, R., Grooß, J.-U., Lemmen, C., Heinze, D., Dameris, M., and Bodeker, G.: Simple measures of ozone depletion in the polar stratosphere, Atmos. Chem. Phys., 8, 251-264, doi:10.5194/acp-8-251-2008, 2008.
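To make the recommended diagnostic concrete, the sketch below computes the spring minimum of the daily, area-weighted average total ozone column poleward of 63° equivalent latitude from gridded fields. It is a minimal illustration assuming plain NumPy arrays for the ozone, equivalent-latitude, and grid-area fields; it is not the authors' code and ignores details such as the early-vortex-breakup exclusion mentioned in the abstract.

```python
import numpy as np

def spring_vortex_ozone_minimum(total_ozone, equiv_lat, area_weights, lat_threshold=63.0):
    """Minimum over days of the area-weighted daily mean total ozone column
    poleward of a given equivalent latitude.

    total_ozone  : array (days, lat, lon) of total column ozone [DU]
    equiv_lat    : array (days, lat, lon) of equivalent latitude [deg]
    area_weights : array (lat, lon) of grid-cell areas (any consistent units)
    """
    daily_means = []
    for day in range(total_ozone.shape[0]):
        mask = equiv_lat[day] >= lat_threshold
        if not mask.any():
            continue  # vortex region not present on this day
        w = area_weights[mask]
        daily_means.append(np.sum(total_ozone[day][mask] * w) / np.sum(w))
    return min(daily_means)

# Tiny synthetic example: 3 days on a 4x8 grid.
rng = np.random.default_rng(0)
ozone = 300 + 40 * rng.standard_normal((3, 4, 8))
eqlat = np.broadcast_to(np.linspace(50, 80, 4)[:, None], (3, 4, 8))
weights = np.cos(np.deg2rad(np.linspace(50, 80, 4)))[:, None] * np.ones((4, 8))

print(f"Spring minimum of daily vortex-mean ozone: "
      f"{spring_vortex_ozone_minimum(ozone, eqlat, weights):.1f} DU")
```

The point of averaging over the equivalent-latitude cap before taking the minimum, rather than taking the minimum over individual grid points, is exactly the robustness argument made in the abstract.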
29
The curved "terminator" between day and night is seen in a composite view from space over Africa and Europe. / NASA Who's your hero? Superman? Batman? The Environmental Protection Agency? The answer may say something about how you read news stories about climate change, suggests a scientific experiment that tested how people think about science. President Obama made "carbon pollution" the bad guy in his speech this week outlining federal steps to cut greenhouse gas emissions, particularly carbon dioxide from power plants. Carbon dioxide released by burning coal, oil, natural gas and other fossil fuels is the leading "greenhouse" gas that retains heat in the atmosphere. Such gases are the leading drivers behind global warming, including the roughly 1.3-degree increase in average surface temperatures nationwide over the past century. Temperatures are likely to increase even more in this one, according to the U.S. National Academy of Sciences and other scientific bodies. How you read all that information may depend on your outlook on life, suggests the recent Political Psychology journal study by political scientists Michael Jones of Virginia Tech and Geoboo Song of the University of Arkansas. The study quizzed 2,005 people on their environmental views, then sorted them into three common types seen in past psychology studies examining perspectives on the environment: "hierarchs" who view experts as necessary to help the planet avert precarious environmental disasters; "egalitarians" who are cautious about human activities they see as threatening a fragile environment; and "individualists" who see nature as resilient as long as events are allowed to run their natural course. Next they asked separate groups of these folks to read 800-word news stories about climate change. All these stories contained factual information about global warming that was identical. All of it was taken from the 2007 Intergovernmental Panel on Climate Change report, an exhaustive summary of climate science assembled by a worldwide group of scientists. The only difference between the news stories was that they swapped out who was the hero and the bad guy in each one. Each person read only one story. Here's how it worked:
- In the egalitarian-themed story, selfish corporations and governments have driven the environment "to the brink of destruction," and environmental groups are the heroes leading a call for using solar, wind and other "renewable" energy sources to beat greenhouse gases.
- In a hierarchical-themed one, the heroes are governments and scientists who tame runaway markets and population increases that threaten the environment by using nuclear energy that obviates the need for fossil fuels.
- In the individualist-themed one, the good guys are industry think-tanks and pundits who face "naive but dangerous," idealist and self-interested government villains standing in the way of free-market solutions to climate change, such as creating a business to buy and sell trading rights to carbon emissions.
As a control measure, they asked some folks to read a fact sheet with the information in the stories, widely seen as the worst way to get information across to people. (The kind of list you see above you right now. Did you read it?)
Past work by Jones and his colleagues has shown that people retain information better from stories that are congruent with their outlook, a kind of "motivated reasoning" where people reinforce messages they already have in their heads and downplay anything jarring with the established views. In the new study, the researchers wanted to look under the hood on how this happens, so they asked people to categorize the information in the stories. By seeing how different kinds of people sorted 27 different terms found in the stories - words for heroes, villains and adjectives such as "scientific expertise," "environmentalist" and "terrifying" - they hoped to understand how people were thinking after reading them. People in the experiments were asked to sort all the terms into six categories of their own choosing.Participants were told there was no right or wrong answer. What the study found is that asking people to read a story that didn't match their environmental outlook scrambled how they categorized the terms used in the news stories. Individualists who read an individualist-themed story clustered environmental groups into one category and put "industry," "competition" and "free markets" into another. Nice and neat. Individualists asked to read the egalitarian story seemed confused, putting the free-market Cato Institute into the same category as the environmentalist Club of Rome, or putting "cap and trade" (a market for pollution trading) into a category with "population control." "Overall, people sorted things differently depending on whether they had read a story congruent or incongruent with their views," Jones says, at least for individualists and egalitarians (Jones doesn't label these two groups as libertarians and environmentalists, by the way, but a casual reader might). "Heroes are clustered with policy solutions," the study says. "(N)egative adjectives are clustered with villains and alternative policy solutions." In other words, reading a story at odds with your world view scrambled how clearly folks could think about the information they had read. "If the story did not line up with their cultural orientation, then it might as well have been a list," the study concludes. "The findings furnish a stunningly vivid demonstration of how complex beliefs about climate change are and how sensitive they are to cultural meanings," says psychology professor Dan Kahan of Yale Law School, who studies how people think about scientific information. Kahan says it would be a mistake to see the study's results as a "how to" guide for writing news stories. They point to different ways that people think about the news and offer avenues to understanding the best way to get information across to them. That matters because these very kinds of stories are battling it out right now over climate. Obama decried "carbon pollution" and called for "using less dirty energy, using more clean energy, wasting less energy" in his speech. One of his critics, House Science committee chairman Lamar Smith, R.-Texas, countered by saying, "It is only through sustained economic growth that we will be able to make the investments in research and technology necessary to fully understand and properly deal with problems like climate change." Both arguments could have come straight out of the Political Psychology journal study. 
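One way to make the "scrambling" result concrete is to score each participant's sorting of the terms against a reference grouping with a clustering-agreement measure. The sketch below is a hypothetical illustration using the adjusted Rand index from scikit-learn; the terms and sortings are invented placeholders, not the study's actual materials or method.

```python
from collections import defaultdict
from sklearn.metrics import adjusted_rand_score

# Hypothetical sortings of a few story terms into participant-chosen categories.
# The category labels are arbitrary integers; only the grouping pattern matters.
terms = ["industry", "free markets", "cap and trade",
         "environmentalist", "renewable energy", "terrifying"]

reference   = [0, 0, 0, 1, 1, 2]   # a tidy, individualist-style sorting
congruent   = [0, 0, 0, 1, 1, 1]   # reader of a story matching their outlook
incongruent = [0, 1, 0, 0, 1, 2]   # reader of a mismatched story mixes groups

groups = defaultdict(list)
for term, cat in zip(terms, reference):
    groups[cat].append(term)
print("Reference grouping:", dict(groups))

for name, sorting in [("congruent", congruent), ("incongruent", incongruent)]:
    print(f"{name} reader vs reference: "
          f"adjusted Rand index = {adjusted_rand_score(reference, sorting):.2f}")
```

Higher agreement scores correspond to the tidier, more consistent sortings the study reports for readers of outlook-congruent stories; the mismatched reader's score drops toward zero.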
Fans of communication experiments might note that the big applause line in Obama's speech was his comparison of climate change naysayers to "a meeting of the Flat Earth Society," The joke seems to have struck home with Obama's political foes, notes Politico's Darren Goode, who sees the climate debate moving from the science to economics after the speech. How will that turn out? For anyone looking to the study as a guide to future climate debates, there was at least one exception in its results worth noting. Environment-loving egalitarians, as you might expect, clustered the information in the stories better after reading the egalitarian story. Unlike elsewhere in the study, hierarchic and individualistic readers also tended to sort information in an egalitarian-leaning way after reading the egalitarian story about environmentalists championing renewable energy. It could be that everybody likes windmills and nobody really likes nukes, Jones suggests. "Or it could be that the egalitarian story is a kind of dominant one in our culture underneath everything else." Copyright 2015 USATODAY.com Read the original story: Storytelling science illuminates climate views
29
The Obama administration announced first-ever regulations setting strict limits on the amount of carbon pollution that can be generated by any new US power plant, which quickly sparked a backlash from supporters of the coal industry and are certain to face legal challenges. The US Environmental Protection Agency's long-awaited guidelines would make it near impossible to build coal plants without using technology to capture carbon emissions that foes say is unproven and uneconomic. The rules, a revision of a previous attempt by the EPA to create emissions standards for fossil fuel plants, are the first step in President Barack Obama's climate change package, announced in June. The revised rule contained a few surprises after the agency held extensive discussions with industry and environmental groups, raising concerns by industry that the EPA's new restrictions on existing power plants, due to be unveiled next year, will be tough. Sky not falling But the regulations announced on Friday cover only new plants. Under the proposal, new large natural gas-fired turbines would need to meet a limit of 1000 pounds of carbon dioxide per megawatt hour, while new small natural gas-fired turbines would need to meet a limit of 1100 pounds of CO2 per MWh. New coal-fired units would need to meet a limit of 1100 pounds of CO2 per MWh but would be given "operational flexibility" to achieve those levels, the agency said. The most efficient coal plants currently in operation emit at a rate of at least 1800 pounds of CO2 per MWh. In a speech at the National Press Club in Washington on Friday, EPA Administrator Gina McCarthy discussed the rationale behind the new rules, and defended Obama's climate plan, which opponents say amounts to a "war on coal." "There needs to be a certain pathway forward for coal to be successful," she said, adding that "setting fair Clean Air Act standards does not cause the sky to fall." Still, stocks of coal mining companies such as Alpha Natural Resources Inc, Peabody Energy and Arch Coal fell on Friday and are down more than 25 per cent for the year to date. Standards for cleaner air "Today's announcement ... is direct evidence that this Administration is trying to hold the coal industry to impossible standards," said senator Joe Manchin, a Democrat from the coal-producing state of West Virginia. The US Chamber of Commerce, which represents more than 3 million businesses, said the EPA's strategies "will write off our huge, secure, affordable coal resources". In her first major speech since being confirmed to the EPA's top job in July, McCarthy described her commitment to cleaner air in sometimes emotional terms, focused on the impact of pollution on public health. "It's not just the elderly who suffer from air pollution. So do children - especially children in lower income and urban communities," she said. "If your child doesn't need an inhaler, then you are one very lucky parent." Capture technology commercially unproven Under the new rules, any new coal plant built in the United States would need to install technology to capture its carbon waste, known as carbon capture and storage (CCS). That technology, which aims to prevent the release of large volumes of carbon into the atmosphere, is controversial because it is currently not yet operational on a commercial scale, an issue likely to be central to legal challenges to the EPA. 
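The numbers above imply how much carbon a new coal unit would have to capture. A minimal check, ignoring the extra fuel burned to run the capture equipment (which would push the required rate higher):

```python
# Minimum CO2 capture implied for a new coal unit by the proposed limit above.

best_current_coal = 1800.0  # lbs CO2/MWh, most efficient coal plants now operating
coal_limit = 1100.0         # lbs CO2/MWh, proposed limit for new coal units

capture_fraction = 1 - coal_limit / best_current_coal
print(f"A plant emitting {best_current_coal:.0f} lbs/MWh must capture at least "
      f"{capture_fraction:.0%} of its CO2 to meet the {coal_limit:.0f} lbs/MWh limit.")
```

Even the most efficient coal units would therefore need to capture roughly two fifths of their CO2, which is why the rule effectively mandates carbon capture and storage for new coal plants.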
By giving coal plants seven years, rather than the 30 years proposed in 2012, to achieve an emissions rate below 1100 lbs per MWh, the EPA is showing that it has full confidence in the nascent CCS technology. "We now have enough information and confidence to say that a CCS option with coal meets the test of being the best system of emission reduction," David Doniger, policy director of the Natural Resources Defense Council's Climate and Clean Air Program, told Reuters. The EPA previously issued a version of the rule last year but made changes to address potential legal weaknesses and to factor in more than 2 million public comments. The EPA will launch a fresh public comment period after Friday's announcement. It is due to issue a proposal to address emissions from existing power plants - which account for nearly a third of US greenhouse gas emissions - by June 2014. McCarthy, the EPA boss, said on Friday that people should not look at the proposal for new plants and then assume that the future rule for the existing fleet will be similar.
29
- 589 million people in Africa live without access to a public electricity facility - Nuru Energy has created a pedal generator that allows light and mobile phone recharging - The company says its products are more affordable and reliable than other energy solutions - It has set up a network of micro entrepreneurs who sell and recharge the lights When night falls over Rwanda, many rural communities far removed from the country's electricity grid descend into darkness. Unplugged from the power lines, households in these areas rely mainly on fuel-based devices such as kerosene lamps for access to light. Such lanterns, however, are polluting and costly: They emit toxic fumes, pose fire hazards and also put a strain on family budgets. But recently, an innovative solution has emerged to offer affordable and efficient electricity to low-income households while benefiting the communities by providing jobs to local populations. Called POWERcycle, Nuru Energy says it has developed "the world's first commercially available pedal generator" -- a foot or hand-powered device that can recharge up to five modular light emitting diode (LED) lamps in approximately 20 minutes, as well as power mobile phones and radios. The company says each of its portable LED lamps provides one week of light to a rural household. It also claims that its products are more affordable and reliable than other forms of off-grid offerings that have been developed in recent years, including solar lamps or home solar lighting systems. "We looked around and said, well, what is the one energy resource that's untapped in this environment? And human power really came to mind," says Sameer Hajee, chief executive and co-founder of Nuru Energy. "We thought, well, if we can harness human energy in a way that we can create economic opportunity and low-power electricity, wouldn't that be a game changing solution?" According to Lighting Africa, a joint World Bank - IFC program developed to increase access to clean sources of energy for lighting, 589 million people in the continent live without access to a public electricity facility. The group says African poor rural households and small businesses pay $10 billion per year for lighting purposes, while communities not connected to the grid spend $4.4 billion annually on kerosene. Looking to address the issue of energy poverty, Hajee, a social entrepreneur with a lot of experience in international development, spent more than a month in Rwanda in 2008, trying to figure out what were the energy needs of the country's off-grid population. What he found out was "actually quite basic [energy needs]," he says. "It's light, it's cooking, it's mobile phone recharging and radio." Read also: Unplugging from the world's power lines With help from the World Bank, Hajee co-founded Nuru Energy and in 2009 the company started testing its products in the field. Hajee quickly realized, however, that innovative technology was not enough for the project to be successful in a place like rural Rwanda. His company also needed to adopt a creative approach in the distribution front. Read also: Rwanda's B-Boys "We couldn't just sell product -- we had to actually get involved in the value chain downstream," he says. "We thought, well, if the generator can recharge five lights so quickly, could this not be the basis of a recharging business for a local entrepreneur?" As a result, the company decided not to sell its products directly to customers. 
Instead, it set up a network of village-level entrepreneurs who are responsible for marketing, selling, and recharging the lights. Hajee says this unique model of distribution has revolutionized the lives of both micro-entrepreneurs and customers. "If you look at this from the standpoint of the customer," says Hajee, "they would purchase the light for $6 and then they would pay about 20 U.S. cents per week for lighting. This is compared to about $2 a week that they would spend on kerosene before. So it's 10 times cheaper solution for them. "From the entrepreneur's perspective, in 20 minutes of pedaling, they're recharging five lights, earning about $1 -- any of us that work in Africa know that that's much more than people make in an entire day. So it's a huge value proposition for the customer and for the entrepreneur." Hajee notes that this model can easily be emulated across rural Africa. He says that Nuru Energy, which currently focuses on East Africa and India, has already been approached by a number of potential joint venture partners to roll out the project in other parts of the continent. "I really hope that what we're providing here is a stopgap solution to the immediate energy needs of...rural populations," says Hajee. "What I would really hope is that, you know, there's certainly effort needed in providing grid quality electricity to these populations. It'll take some time."
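Hajee's value proposition can be checked directly from the figures he gives. The sketch below reproduces the customer and entrepreneur arithmetic; the payback-period and hourly-earnings lines are added illustrations derived from those same figures, not numbers quoted in the article.

```python
# Customer and entrepreneur economics from the figures quoted above.

kerosene_per_week = 2.00   # USD, typical prior spending on kerosene
recharge_per_week = 0.20   # USD, cost of recharging a Nuru light
lamp_price = 6.00          # USD, one-off purchase

print(f"Weekly saving: ${kerosene_per_week - recharge_per_week:.2f} "
      f"({kerosene_per_week / recharge_per_week:.0f}x cheaper than kerosene)")

# Added illustration (not in the article): weeks for the lamp to pay for itself.
weeks_to_payback = lamp_price / (kerosene_per_week - recharge_per_week)
print(f"Lamp pays for itself in about {weeks_to_payback:.1f} weeks")

# Entrepreneur side: 20 minutes of pedaling recharges 5 lights for ~$1 of revenue.
revenue_per_session, minutes = 1.00, 20
print(f"Entrepreneur earns ${revenue_per_session / (minutes / 60):.2f} per hour of pedaling")
```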
29
Latest Antarctic Bottom Water Stories Far beneath the surface of the ocean, deep currents act as conveyer belts, channeling heat, carbon, oxygen and nutrients around the globe. In the mid-1970s, the first available satellite images of Antarctica during the polar winter revealed a huge ice-free region within the ice pack of the Weddell Sea. This ice-free region, or polynya, stayed open for three full winters before it closed. By collecting water samples up to six kilometers below the surface of the Southern Ocean, UNSW researchers have shown for the first time the impact of ocean currents on the distribution and abundance of marine micro-organisms.
29
Latest Global warming Stories Australia canceled its deeply unpopular carbon tax that has driven up costs for industry and consumers while doing nothing for the environment, say Friends of Science, citing media reports and WASHINGTON, July 29, 2014 /PRNewswire-USNewswire/ -- Emerald Cities Collaborative President and CEO Denise Fairchild released the following statement on the new White House report, The Cost The world faces a small but substantially increased risk over the next two decades of a major slowdown in the growth of global crop yields because of climate change. Scientists have long been concerned that global warming may push Earth's climate system across a "tipping point," where rapid melting of ice and further warming may become irreversible -- a hotly debated scenario with an unclear picture of what this point of no return may look like. Statistical analysis of average global temperatures between 1998 and 2013 shows that the slowdown in global warming during this period is consistent with natural variations in temperature For many people, nothing beats eating a good burger or steak, but two recently-published studies caution that those carnivorous culinary habits could have a significant impact on the environment and contribute to climate change. NASA's Orbiting Carbon Observatory-2, which launched on July 2, will soon be providing about 100,000 high-quality measurements each day of carbon dioxide concentrations from around the globe. Political ideology, education levels affect when people search for climate information New University of Alaska Fairbanks research indicates that arctic thermokarst lakes stabilize climate change by storing more greenhouse gases than they emit into the atmosphere. The current study used shark teeth collected from a new coastal site on Banks Island, which allowed them to gain a more complete understanding of the changes in ocean water salinity across a broader geographic area. An urban heat island (UHI) is a metropolitan area that is drastically warmer than its surrounding rural areas because of human activities. The phenomenon was first looked into and described by Luke Howard during the 1810s, although he wasn’t the one to name the phenomenon. The difference in temperature is normally bigger at night as opposed to during the day, and it most obvious when winds are weak. Seasonally, UHI is seen during the summer and the winter. The key cause of the urban heat... Climate change is a substantial and lasting change in the statistical distribution of weather patterns over periods of time ranging from decades to millions of years. It might be a change in the average weather conditions, or in the distribution of weather around the average conditions. Climate change is a result of factors that include oceanic processes, biotic processes, variations in solar radiation received buy Earth, volcanic eruptions, and plate tectonics, and human induced alterations... Being a meteorologist for over thirteen years you start to take note of many things in the atmosphere and how they repeat themselves. Our Climate is no different. The definition of climate is stated as: the collective weather data in regards to moisture and temperature for over 30 years for the same location. So to better understand our climate we need to look at this. First, we have average temperatures for given places based on the 30 year average. Some years the temps are warmer or... 
The water cycle (or hydrologic cycle) describes the continuous movement of water above, below, and on the planet. Since the water cycle is in fact a "cycle", there is no beginning or end. Water exists in three states: liquid, vapor, and ice. Although the balance of water on our planet is fairly constant, individual water molecules may come and go. The water cycle is driven by the sun. The sun heats the oceans and allows water to evaporate into the air. The sun also heats snow and ice which... Arctic haze is a phenomenon that occurs in the atmosphere at high latitudes in the Arctic due to air pollution. What distinguishes Arctic haze from haze found elsewhere, is the ability of its chemical ingredients to endure in the atmosphere for a longer period of time compared to other pollutants. Due to limited snowfall, rain, or turbulent air to displace pollutants from the polar air in the spring, Arctic haze can continue for more than a month in the northern atmosphere. Arctic haze was...
In 2011, an energy firm hired by the state of California estimated that a 1,750-square-mile rock formation extending from Sacramento to Los Angeles could yield 13.7 billion barrels of oil, based on existing extraction technologies. That projection spurred hopes of an energy boom in the state like those that have boosted the economies of North Dakota and Texas. In fact the University of Southern California forecast last year that the Monterey Shale formation could create up to 2.8 million new jobs and generate as much as $24.6 billion per year in new tax revenue in California by 2020. But last month scientists from the U.S. Energy Information Administration issued a report indicating that current extraction methods, including hydraulic fracturing, or "fracking," would yield only 600 million barrels of oil from the Monterey Shale, 96 percent less than the earlier projection. "From the information we've been able to gather, we've not seen evidence that oil extraction in this area is very productive using techniques like fracking," said John Staub, who led the energy agency's study. Staub added that compared with oil production at North Dakota's Bakken Shale formation and the Eagle Ford Shale in Texas, "the Monterey formation is stagnant." USC economics professor Adam Rose, who coauthored last year's study on the economic impact of the Monterey Shale, called the new estimate "a phenomenal cutback." "It's amazing in terms of that much refinement in the numbers," he said. The news had some environmental activists stepping up their calls for California Gov. Jerry Brown (D) and lawmakers to put an end to "fracking" in the state. "The myth of vast supplies of domestic oil resources and billions in potential revenue from drilling in California by the oil industry has been busted," said San Francisco billionaire Tom Steyer, founder of the nonprofit group NextGen Climate. "Our leaders in Sacramento can no longer afford to pin our hopes on the false promises of a fossil fuel windfall — especially when our state is poised to lead the nation and the world toward a cleaner, more sustainable energy economy." Zack Malitz, campaign manager for the San Francisco-based liberal activist group CREDO, likewise, said the new estimate means "there is now no longer any political gain to be had for the governor in supporting fracking and putting our state at risk from water contamination, earthquakes and climate change." "He must enact a moratorium," he said. But a push for such a ban failed in the state's Legislature last year. And a bill (SB 1132) introduced this year by Sen. Holly Mitchell (D) has failed to make it out of the Senate. The oil industry, meanwhile, doesn't appear ready to raise the white flag. "We've always been quite clear that there are challenges to producing oil out of the Monterey" Shale that differ from those associated with the formations in North Dakota, Texas and elsewhere, said Tupper Hull, vice president of the Western States Petroleum Association. "I have every confidence that the oil companies possess the experience and the ability to innovate. If anyone can figure it out, they can figure it out." And Severin Borenstein, director of the University of California Energy Institute, said that although "this is definitely a huge setback to the expansion of oil production in California...I would not at all say the game is over." "It is way too early to say that this is the death of fracking in California. 
Technology only moves forward, and I am sure there is going to be millions of dollars spent trying to make it better specifically for California because there is so much potential." (SAN JOSE MERCURY NEWS, LOS ANGELES TIMES, STATE NET) The above article is provided by the State Net Capitol Journal.
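As a quick back-of-envelope check on the figures reported above (an illustrative sketch, not part of the original reporting), the revised EIA estimate can be compared with the 2011 projection:

```python
# Rough arithmetic check of the reported revision, using only the figures quoted above.
original_estimate = 13.7e9   # barrels, 2011 projection for the Monterey Shale
revised_estimate = 0.6e9     # barrels, 2014 EIA figure for current extraction methods

reduction = (original_estimate - revised_estimate) / original_estimate
print(f"Reduction: {reduction:.0%}")   # about 96 percent, matching the article
```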
New study finds global warming, melting sea ice, connected to polar vortex

WASHINGTON—As the world gets warmer, parts of North America, Europe and Asia could see more frequent and stronger visits of cold air, a new study says. Researchers say that’s because of shrinking ice in the seas off Russia. Normally, sea ice keeps heat energy from escaping the ocean and entering the atmosphere. When there’s less ice, more energy gets into the atmosphere and weakens the jet stream, the high-altitude river of air that usually keeps Arctic air from wandering south, said study co-author Jin-Ho Yoon of the Pacific Northwest National Laboratory in Richland, Washington. So the cold air escapes instead. That happened relatively infrequently in the 1990s, but since 2000 it has occurred nearly every year, according to a study published Tuesday in the journal Nature Communications. A team of scientists from South Korea and the United States found that many such cold outbreaks happened a few months after unusually low ice levels in the Barents and Kara seas, off Russia. The study examined historical data and then conducted computer simulations. Both approaches showed the same strong link between shrinking sea ice and cold outbreaks, according to lead author Baek-Min Kim, a research scientist at the Korea Polar Research Institute. A large portion of sea ice melting is driven by man-made climate change from the burning of fossil fuels, Kim wrote in an email. Sea ice in the Arctic usually hits its low mark in September and that’s the crucial time point in terms of this study, said Mark Serreze, director of the National Snow and Ice Data Center in Boulder, Colo. Levels reached a record low in 2012 and are slightly up this year, but only temporarily, with minimum ice extent still about 40 per cent below 1970s levels, he said. Kevin Trenberth, climate analysis chief at the National Center for Atmospheric Research in Boulder, is skeptical about such connections and said he doesn’t agree with Yoon’s study. His research points more to the Pacific than the Arctic for changes in the jet stream and polar vortex behaviour, and he said Yoon’s study puts too much stock in an unusual 2012. But the study was praised by several other scientists, who said it does more than show that sea ice melt affects worldwide weather: it demonstrates how it happens, with a specific mechanism.
A stubborn high-pressure system is the culprit behind the dangerously high heat wave that's been baking much of the U.S., experts say. The high-pressure system—a large area of dense air—is being held in place by upper-level winds known as the jet stream. Within the system, dense air sinks and becomes warmer, and since warm air can hold more moisture than cooler air, there's also very high humidity. Stationary high-pressure systems aren't unusual during the summer, according to Eli Jacks, a meteorologist at the National Weather Service headquarters in Silver Spring, Maryland. But what sets this system apart is its size and strength. "It's exceptionally strong and very wide, covering thousands of miles from border to border and from the Rocky Mountains to the East Coast," Jacks said.

Heat Wave Part of Warming Trend?

"Climate change occurs over years and decades," he said. "It's not possible to draw conclusions just because it's hot these few days." But Kerry Emanuel, a meteorologist at the Massachusetts Institute of Technology, said the current extreme heat is "happening in the context of climate warming in general." "Events like this will become more frequent." While much of the U.S. is suffering, the high-pressure system that's holding the heat in place also is keeping conditions cooler than usual in the Pacific Northwest. Rick Dittmann, meteorologist in charge of the National Weather Service office in Pocatello, Idaho, said his region has had a cooler-than-average spring and early summer. These conditions created a deeper-than-usual snowpack in the northern Rocky Mountains, he said.

Heat, Humidity a Dangerous Combo

In addition to the heat, the high humidity can be dangerous to human health, noted Maryland-based meteorologist Jacks. During periods of unusually high humidity, sweat doesn't act as a natural cooling agent, Jacks said. "The body can't evaporate moisture—it can't cool itself off," he said. "The body temperature actually starts rising." The heat has already been blamed for about two dozen deaths across the U.S. this week. Unfortunately, sweltering temperatures are predicted to continue their grip on the Midwest, the Northeast, and the South. For instance, Calvin Meadows, a meteorological technician at the National Weather Service office in Sterling, Virginia, said the high for Washington, D.C., is expected to reach 101 degrees Fahrenheit (38 degrees Celsius) today and 97 degrees (36 degrees Celsius) Saturday. The agency has issued an excessive heat warning from noon until 8 p.m. ET, Meadows said. Jacks, the Maryland-based meteorologist, added that the above-average temperatures would continue in much of the nation into August.
London/Nairobi — Keeping Average Global Temperature Rise to Below 2°C Still Achievable, with Potentially Big Cuts Possible from Buildings, Transportation and Avoided Deforestation - But Time is Running Out

Action on climate change needs to be scaled up and accelerated without delay if the world is to have a running chance of keeping a global average temperature rise below 2 degrees Celsius this century. The Emissions Gap Report, coordinated by the UN Environment Programme (UNEP) and the European Climate Foundation, and released days before the convening of the Climate Change Conference of the Parties in Doha, shows that greenhouse gas emissions levels are now around 14 per cent above where they need to be in 2020. Instead of declining, concentrations of warming gases like carbon dioxide (CO2) are actually increasing in the atmosphere, up around 20 per cent since 2000. If no swift action is taken by nations, emissions are likely to be at 58 gigatonnes (Gt) in eight years' time, says the report, which involved 55 scientists from more than 20 countries. This will leave a gap that is now bigger than it was in earlier UNEP assessments of 2010 and 2011, and is in part the result of projected economic growth in key developing economies and a phenomenon known as 'double counting' of emission offsets. Previous assessment reports have underlined that emissions need to be on average at around 44 Gt or less in 2020 to lay the path for the even bigger reductions needed at a cost that is manageable. The Emissions Gap Report 2012 points out that even if the most ambitious level of pledges and commitments were implemented by all countries, and under the strictest set of rules, there will now be a gap of 8 Gt of CO2 equivalent by 2020. This is 2 Gt higher than last year's assessment, with yet another year passing by. Preliminary economic assessments, highlighted in the new report, estimate that inaction will trigger costs likely to be at least 10 to 15 per cent higher after 2020 if the needed emission reductions are delayed into the following decades. Achim Steiner, UN Under-Secretary General and UNEP Executive Director, said: "There are two realities encapsulated in this report: that bridging the gap remains do-able with existing technologies and policies; that there are many inspiring actions taking place at the national level on energy efficiency in buildings, investing in forests to avoid emissions linked with deforestation and new vehicle emissions standards alongside a remarkable growth in investment in new renewable energies worldwide, which in 2011 totaled close to US$260 billion". "Yet the sobering fact remains that a transition to a low carbon, inclusive Green Economy is happening far too slowly and the opportunity for meeting the 44 Gt target is narrowing annually," he added. "While governments work to negotiate a new international climate agreement to come into effect in 2020, they urgently need to put their foot firmly on the action pedal by fulfilling financial, technology transfer and other commitments under the UN climate convention treaties. There is also a wide range of complementary voluntary measures that can bridge the gap between ambition and reality now rather than later," said Mr. Steiner. The report estimates that there are potentially large emissions reductions possible, in a mid-range of 17 Gt of CO2 equivalent, from sectors such as buildings, power generation and transport that can more than bridge the gap by 2020.
Meanwhile, there are abundant examples of actions at the national level in areas ranging from improved building codes to fuel standards for vehicles which, if scaled up and replicated, can also assist. Christiana Figueres, Executive Secretary of the UN Framework Convention on Climate Change, said, "This report is a reminder that time is running out, but that the technical means and the policy tools to allow the world to stay below a maximum 2 degrees Celsius are still available to governments and societies". "Governments meeting in Doha for COP18 now need to urgently implement existing decisions which will allow for a swifter transition towards a low-carbon and resilient world. This notably means amending the Kyoto Protocol, developing a clear vision of how greenhouse gases can be curbed globally before and after 2020, and completing the institutions required to help developing countries green their economies and adapt, along with defining how the long-term climate finance that developing countries need can be mobilized. In addition, governments need to urgently identify how ambition can be raised," added Ms. Figueres.

Bridging the Gap

The report looked at sectors where the necessary emissions reductions may be possible by 2020. Improved energy efficiency in industry could deliver cuts of between 1.5 and 4.6 Gt of CO2 equivalent; followed by agriculture, 1.1 to 4.3 Gt; forestry, 1.3 to 4.2 Gt; the power sector, 2.2 to 3.9 Gt; buildings, 1.4 to 2.9 Gt; transportation including shipping and aviation, 1.7 to 2.5 Gt; and the waste sector, around 0.8 Gt. The report points out that some sectors have even bigger potential over the long term: boosting the energy efficiency of buildings, for example, could deliver average reductions of around 2.1 Gt by 2020 but cuts of over 9 Gt of CO2 equivalent by 2050. "This implies that by 2050 the building sector could consume 30 per cent less electricity compared to 2005 despite a close to 130 per cent projected increase in built floor area over the same period," it says. The report concludes that if this is to happen, "state of the art building codes may need to become mandatory in the next 10 years in all of the major economies such as the United States, India, China and the European Union". Further emission reductions are possible from more energy efficient appliances and lighting systems. The report cites Japan's Top Runner Programme and the Ecodesign Directive of the European Union, which have triggered household electricity consumption savings of 11 per cent and 16 per cent respectively. It also cites Ghana's standards and labelling programme for air conditioners, which is set to save consumers and businesses an estimated US$64 million annually in reduced energy bills and around 2.8 million tonnes of CO2 equivalent over 30 years. Potential emissions reductions from the transportation sector are assessed at 2 Gt of CO2 equivalent by 2020. The report notes that there is already a shift, with the eight biggest multilateral development banks at the recent Rio+20 Summit pledging US$175 billion over the next decade for measures such as bus rapid transport systems. The report recommends the "Avoid, Shift and Improve" policies and measures that encourage improved land planning and alternative mobility options such as buses, cycling and walking over the private car, alongside better use of rail freight and inland waterways. Combinations of improved vehicle standards and scrappage schemes for old vehicles can also assist.
The report says approved and proposed new standards in seven countries ranging from Australia and China to the European Union, the Republic of Korea and the United States are expected to reduce fuel consumption and greenhouse gas emissions of new light-duty vehicles by over 50 per cent by 2025 from 2000 levels. "Although it remains under-utilized, avoided deforestation is considered a low-cost greenhouse gas emissions reduction option," says the report. Policies to assist in reducing deforestation and, thus, greenhouse gas emissions range from establishing protected areas such as national parks to economic instruments such as taxes, subsidies and payments for ecosystem services. The report cites Brazil, where a combination of conservation policies, allied to falls in agricultural commodity prices, has led to a decrease in deforestation of three quarters since 2004, avoiding 2.8 Gt of CO2 equivalent between 2006 and 2011. Protected areas in Costa Rica now represent over a fifth of its territory, reducing greenhouse gas emissions and triggering a rise in tourists from just under 390,000 in 1988 to 2.5 million in 2008: tourism now accounts for around 15 per cent of GDP. These actions by Brazil and Costa Rica predate Reduced Emissions from Deforestation and forest Degradation (REDD or REDD+) policies under the UN Framework Convention on Climate Change. The report indicates that scaled-up action under, for example, the UN-REDD initiative, which is working with over 40 countries, can provide even larger emission reductions while generating additional benefits such as jobs in natural resource management.
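As a quick consistency check on the sector figures quoted above (an illustrative calculation of my own, not part of the report), the midpoints of the quoted ranges can be summed and compared with the 8 Gt gap and the 17 Gt mid-range figure:

```python
# Sector reduction potentials for 2020 quoted above, in Gt CO2 equivalent (low, high).
sectors = {
    "industry":    (1.5, 4.6),
    "agriculture": (1.1, 4.3),
    "forestry":    (1.3, 4.2),
    "power":       (2.2, 3.9),
    "buildings":   (1.4, 2.9),
    "transport":   (1.7, 2.5),
    "waste":       (0.8, 0.8),   # "around 0.8 Gt"
}

midpoint_total = sum((lo + hi) / 2 for lo, hi in sectors.values())
gap = 8.0   # Gt CO2e gap under the most ambitious pledges, per the report

print(f"Sum of sector midpoints: {midpoint_total:.1f} Gt CO2e")   # roughly 17 Gt
print(f"Emissions gap to bridge: {gap:.1f} Gt CO2e")
print(f"Midpoint potential exceeds the gap by a factor of {midpoint_total / gap:.1f}")
```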
Tropospheric Chemistry: Measurements

Tropospheric chemical research relies on measurements of the chemical and physical processes of the troposphere, particularly the effects of pollution on those processes. These measurements are obtained from mobile and ground-based platforms during coordinated field projects including SENEX 2013, CalNex 2010, ARCPAC 2008, ICEALOT 2008, TexAQS 2006, NEAQS-ITCT (under ICARTT) 2004, ITCT 2002, NEAQS 2002, TexAQS 2000, and SOS 1999. Information about field projects before 1999 is also available. The field experiments have been designed to address components of three major topics (climate change, air quality and air pollution) by making measurements of chemical species, aerosol size and composition, as well as solar radiation, using an extensive suite of instruments. These measurements are made by scientists from CSD and collaborators at other institutions. Data from all of the major field campaigns are available; access may require authentication. Resources for investigators include Data (Igor) Tools (authentication required), ICARTT Data Format information, a faceted datasets search tool covering all major projects, and datasets prepared for modellers.
Elvia Thompson/Etta Pagani, Goddard Space Flight Center, Greenbelt, Md. April 15, 2004

Satellites Record Weakening North Atlantic Current

A North Atlantic Ocean circulation system weakened considerably in the late 1990s, compared to the 1970s and 1980s, according to a NASA study. Sirpa Hakkinen, lead author and researcher at NASA's Goddard Space Flight Center, Greenbelt, Md., and co-author Peter Rhines, an oceanographer at the University of Washington, Seattle, believe slowing of this ocean current is an indication of dramatic changes in the North Atlantic Ocean climate. The study's results about the system that moves water in a counterclockwise pattern from Ireland to Labrador were published online by the journal Science on the Science Express Web site. The current, known as the subpolar gyre, has weakened in the past in connection with certain phases of a large-scale atmospheric pressure system known as the North Atlantic Oscillation (NAO). But the NAO has switched phases twice in the 1990s, while the subpolar gyre current has continued to weaken. Whether the trend is part of a natural cycle or the result of other factors related to global warming is unknown. "It is a signal of large climate variability in the high latitudes," Hakkinen said. "If this trend continues, it could indicate reorganization of the ocean climate system, perhaps with changes in the whole climate system, but we need another good five to 10 years to say something like that is happening." Rhines said, "The subpolar zone of the Earth is a key site for studying the climate. It's like Grand Central Station there, as many of the major ocean water masses pass through from the Arctic and from warmer latitudes. They are modified in this basin. Computer models have shown the slowing and speeding up of the subpolar gyre can influence the entire ocean circulation system." Satellite data makes it possible to view the gyre over the entire North Atlantic basin. Measurements from deep in the ocean, using buoys, ships and new autonomous "robot" Seagliders, are important for validating and extending the satellite data. Sea-surface height satellite data came from NASA's Seasat (July, August 1978), the U.S. Navy's Geosat (1985 to 1988), the European Space Agency's European Remote Sensing Satellite 1/2 and NASA's TOPEX/Poseidon (1992 to present). Hakkinen and Rhines were able to reference earlier data to TOPEX/Poseidon data, and translate the satellite sea-surface height data to velocities of the subpolar gyre. The subpolar gyre can take 20 years to complete its route. Warm water runs northward through the Gulf Stream, past Ireland, before it turns westward near Iceland and the tip of Greenland. The current loses heat to the atmosphere as it moves north. Westerly winds pick up that lost heat, creating warmer, milder European winters. After frigid Labrador Sea winters, the water in the current becomes cold, salty and dense, plunges beneath the surface, and heads slowly southward back to the equator. The cycle is sensitive to the paths of winter storms and to the buoyant fresh water from glacial melting and precipitation, all of which are experiencing great change. While previous studies have proposed that winds resulting from the NAO have influenced the subpolar gyre's currents, this study found heat exchanges from the ocean to the atmosphere may be playing a bigger role in the weakening current.
Using Topex/Poseidon sea-surface height data, the researchers inferred Labrador Sea water in the core of the gyre warmed during the 1990s. This warming reduces the contrast with water from warmer southern latitudes, which is part of the driving force for ocean circulation. The joint NASA-CNES (French Space Agency) Topex/Poseidon oceanography satellite provides high-precision data on the height of the world's ocean surfaces, a key measure of ocean circulation and heat storage in the ocean. NASA's Earth Science Enterprise is dedicated to understanding the Earth as an integrated system and applying Earth System Science to improve prediction of climate, weather and natural hazards using the unique vantage point of space. NASA, the National Oceanic and Atmospheric Administration, and the National Science Foundation funded the study.
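The release does not spell out how sea-surface height is converted into current speeds. A common approach, sketched below with purely illustrative numbers (the latitude, height difference and distance are assumptions, not values from the study), is the geostrophic balance relation, in which surface velocity is proportional to the slope of the sea surface:

```python
import numpy as np

# Geostrophic surface velocity from a sea-surface height (SSH) gradient, a standard
# oceanographic relation; this is an illustrative sketch, not the study's processing.
g = 9.81                      # gravitational acceleration, m/s^2
omega = 7.292e-5              # Earth's rotation rate, rad/s
lat = 58.0                    # a subpolar North Atlantic latitude, degrees (assumed)
f = 2 * omega * np.sin(np.radians(lat))    # Coriolis parameter, 1/s

# Assume SSH drops by 0.10 m over 500 km across the gyre (illustrative numbers).
d_eta = -0.10                 # change in sea-surface height, m
d_y = 500e3                   # northward distance, m

u = -(g / f) * (d_eta / d_y)  # eastward geostrophic velocity, m/s
print(f"Coriolis parameter f = {f:.2e} 1/s")
print(f"Geostrophic speed  u = {u:.3f} m/s")   # a few centimetres per second
```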
Zurawlow and Chevron You might miss Zurawlow as you drive the narrow roads that weave through the gentle rolling hills of Southeastern Poland. A small sign is all that indicates a small town is just up ahead, along a dirt and gravel road. There are two other signs at this fork in the road. One reads, "Chevron: We don’t want gas." The sign has a double meaning; it refers to the gas chambers used in Poland during the Holocaust and to the natural gas to be extracted through unconventional shale drilling. The other sign reads, “Yesterday Chernobyl, Today Chevron". It refers to the nuclear reactor disaster in neighboring Ukraine. Residents fear similar dangers may be lurking in their future. Similar signs hang on the house of Andrezj Bak, a civil engineer, who lives and works in a nearby city but spends weekends at his home in this rural mixed-income community. He grows rows of food crops; he drinks water from an underground well; and he is friendly with his neighbors. Bak keeps a thick binder containing Poland’s environmental laws, European Union pollution regulations, newspaper and magazine clippings, and correspondence he has had with Chevron. "Nothing protects people who live here, nothing will protect them from pollution, from destruction of the area, and also our legal system allows them to be removed from the land," he said. Poland’s Geological and Mining Law was implemented this year. Similar to U.S. eminent domain laws, the mining law allows the state to seize land sitting on shale gas deposits for industrial purposes. That worries people like Bak, who says companies like Chevron already have too much power in a nation yearning to develop a natural gas industry but without a legal structure to regulate it. He reads stories of dangerous methane releases and poor regulation in the United States as cautionary tales of what could happen in Poland. "We just started reading and looking for things on the internet and talking to lawyers and specialists. When we first learned of this, we were in favor of gas -- in favor of development, a chance to give Poland a new fuel, a new energy source, until we read about it in depth and until we realized that there are threats related to it. Basically, the more information we got and the more contacts we started making, the more we became convinced that this is not the right thing to do here," he said. Bak’s neighbor, Marek Bernard, a wheat and dairy farmer, feels the same way. "I think it is just impossible to do it here. We have no interest in it; we don’t want it because the gas will go somewhere else. Someone else will have profits; we will have no advantages from it, no positive results from it," he said. Landowners in Poland do not own the rights to minerals on their property; the state does. Chevron had government approval to conduct a test drill for shale gas in Zurawlow earlier this year. Another local farmer, Wieslaw Gryn, compares Chevron's arrival to a western movie. "Chevron came here like John Wayne into a saloon. They just organized a meeting a week before they wanted to start the ground works. They first signed the agreements and then informed people what was going to happen here," he said. Gryn runs a large 1800-acre farm in Zurawlow. His family has worked this land for the last century. After collective Soviet-style agriculture collapsed, he bought as many acres as he could afford to increase his business. He now sells his farm products internationally – to Europe, the US and The Middle East. 
When Chevron came into town, Gryn and his neighbors decided they weren’t going to let the intruders get started. Residents blocked the road and called the police. Although Chevron had the paperwork to begin drilling, the townspeople found a loophole to stop it - flying right over their heads. Poland is a birder’s paradise with nearly 500 species of birds, many of them rare. The country’s laws dictate that during bird breeding season, which starts in March and lasts several months, nothing can be done on the ground or in the air that would interfere with the birds' habits and habitats. The locals used that exception to prevent Chevron from drilling. So the company picked up and left, without exploring for shale gas. "At this time, we have no immediate operational plans at Zurawlow, although we remain open for future operations," said Grazyna Bukowska, the spokesperson for Chevron Polska Energy. She says Chevron representatives met with residents in eight communities – seven of which supported their operations. She said only the meeting in Zurawlow was interrupted by protesters, as Chevron representatives tried to discuss their plans for groundwater and soil management and play up economic opportunities for the town. The protesters said they found it insulting that the company came to town with toys for the children but declined to answer the grownups’ questions. In surveys Chevron has conducted, Bukowska said, respondents are generally in favor of what Chevron is doing, or plans to do. She reiterates that the driller had the proper paperwork to proceed in Zurawlow. "Chevron had all the permits, but what's the reason to do such action? I mean..." She resumed after a long pause, "sorry, I have to stick to my statement which I stated before. Chevron had all the permits which allowed us to enter the site," said Bukowska. Civil engineer Bak and his neighbors are looking for ways, other than avian protection, to keep Chevron out. He says the weight of the company's trucks exceeds the legal limit on the roads. The townspeople have filed complaints about that with authorities. Martin Zieba from the Polish Oil and Gas Group says communities that welcome drilling should be allowed to permit it. He says the fears of environmental contamination are unwarranted and are based on a lack of information about the fracturing process and gas extraction. Although Chevron may decide not to proceed in Zurawlow, it has already leased the land it wants. The property is owned by Janusz Katek. Katek would not disclose how much he was being paid -- or for how long. But he concedes he cannot farm for two years on the land Chevron has leased. He says he needs to take advantage of this financial opportunity. Compared to his neighbors, Katek lives in a run-down house, with far less acreage. "I don’t work with this industry, so I don’t know details. I don’t know if this is good or bad. I believe if the government is giving them concessions and allowances to drill [then] those people know what they are doing, and that is why they are sitting there in all those ministries and offices, and they are supposed to know whether this is a safe thing or not. How can I know such a thing? If they receive permits, then I know that this is something that is not harmful," he said. Katek says until Chevron drills, he won’t know whether he made a good or bad decision. "I still talk to people like I used to. 
We are still neighbors, but I don’t believe in all they say because we were together in the municipality and the district offices, and the officials there said perhaps there is some other company that is paying them [the upset neighbors], perhaps it is Gazprom, a Russian company that is paying them because they [Gazprom] don’t want this drilling to be going on here," he said. One of those officials is Piotr Wozniak, Poland’s Chief Geologist and the Deputy Minister for the Environment. He contends the type of shale and its depth -- the very things that make it difficult and expensive to drill in Poland -- ensure the process will be safe, especially when the investment is $15 million dollars per well. "When thinking and talking about water contamination, it's most unlikely. In Poland we have our shales located at least three kilometers below the surface or more, so it’s like the distance from here to the closest Metro station," he said. Wozniak says even without the depth issue, fears of environmental damage remain unfounded. "I don’t know of any proved case of contaminated water due to fracking in the United States. Again, I haven’t heard of any proved case of contaminated water due to fracking in the United States. So we defer opinions on this," he said with a laugh. That’s not true, and the residents of Zurawlow, many of whom spend hours researching online, know better. They read tales of American farmers who have had their cattle die and their farms lose value. And they worry about where the water for the extraction process will come from and where the waste will be discarded.
Indoor air quality

It may come as a surprise to many of us that the air in an urban street with average traffic might actually be cleaner than the air in your living room. Recent studies indicate that some harmful air pollutants can exist in higher concentrations in indoor spaces than outdoors. In the past, indoor air pollution received significantly less attention than outdoor air pollution, especially outdoor air pollution from industrial and transport emissions. However, in recent years the threats posed by exposure to indoor air pollution have become more apparent. Imagine a newly painted house, decorated with new furniture… Or a workplace filled with a heavy smell of cleaning products… The quality of air in our homes, work places or other public spaces varies considerably, depending on the material used to build it, to clean it, and the purpose of the room, as well as the way we use and ventilate it. Poor air quality indoors can be especially harmful to vulnerable groups such as children, the elderly, and those with cardiovascular and chronic respiratory diseases such as asthma. Some of the main indoor air pollutants include radon (a radioactive gas formed in the soil), tobacco smoke, gases or particles from burning fuels, chemicals, and allergens. Carbon monoxide, nitrogen dioxides, particles, and volatile organic compounds can be found both outdoors and indoors.

Policy measures can help

Some indoor air pollutants and their health impacts are better known and receive more public attention than others. Bans on smoking in public spaces are one example. In many countries, smoking bans in various public places were quite controversial before relevant legislation was introduced. For example, within days of the entry into force of the smoking ban in Spain in January 2006, there was a growing movement to assert what many considered their right to smoke in indoor public places. But the ban has also led to greater public awareness. In the days following its entry into force, 25 000 Spaniards per day sought medical advice on how to quit smoking. Much has changed in public perception when it comes to smoking in public places and on public transport. Many airlines started to ban smoking on short-haul flights in the 1980s, followed by long-haul ones in the 1990s. It is now unthinkable in Europe to allow non‑smokers to be exposed to second-hand smoke on public transport. Today many countries, including all the EEA countries, have some legislation to limit or ban indoor smoking in public places. After a series of non-binding resolutions and recommendations, the European Union also adopted in 2009 a resolution calling on EU Member States to enact and implement laws to fully protect their citizens from exposure to environmental tobacco smoke. Smoking bans appear to have improved indoor air quality. Environmental tobacco smoke pollutants are declining in public places.
In the Republic of Ireland, for example, measurements of air pollutants in public places in Dublin before and after the introduction of a smoking ban showed decreases of up to 88 % for some air pollutants found in environmental tobacco smoke. As in the case of outdoor pollutants, the impacts of indoor air pollutants are not limited to our health only. They also come with high economic costs. The cost of exposure to environmental tobacco smoke in EU workplaces alone was estimated at over EUR 1.3 billion in direct medical costs and over EUR 1.1 billion in indirect costs linked to productivity losses in 2008.

Indoor pollution is much more than tobacco smoke

Smoking is not the only source of indoor air pollution. According to Erik Lebret from the National Institute for Public Health and the Environment (RIVM) in the Netherlands, ‘Air pollution does not stop at our doorsteps. Most outdoor pollutants penetrate into our homes, where we spend most of our time. The quality of indoor air is affected by many other factors, including cooking, wood stoves, burning candles or incense, the use of consumer products like waxes and polishes for cleaning surfaces, building materials like formaldehyde in plywood, and flame retardants in many materials. Then there is also radon coming from soils and building materials.’ European countries are trying to tackle some of these sources of indoor air pollution. According to Lebret, ‘we are trying to substitute more toxic substances with less toxic substances or to find processes that reduce emissions, as in the case of formaldehyde emissions from plywood. Another example can be seen with the reduction of certain radon-emitting materials used in wall construction. These materials were used in the past but their use has since been restricted.’ Passing laws is not the only way to improve the quality of the air we breathe; we can all take steps to control and reduce airborne particles and chemicals in indoor spaces. Small actions such as ventilating enclosed spaces can help improve the quality of the air around us. But some of our well-intended actions might actually have adverse effects. Lebret suggests: ‘We should ventilate, but we should not over-ventilate as this is a substantial loss of energy. It leads to more heating and use of fossil fuels, and consequently means more air pollution. We should think of it as making more sensible use of our resources in general.’ For references, please go to www.eea.europa.eu/soer. This briefing is part of the EEA's report The European Environment - State and Outlook 2015. The EEA is an official agency of the EU, tasked with providing information on Europe’s environment.
Water Power | Published on Friday, 25 May 2012

The medium-scale wave energy power device in the Black Sea developed by Israeli firm Eco Wave Power, which was completed in April this year, is now in full operation. Following the installation of the power plant, the company carried out a series of tests. It examined the characteristics of the two different floater shapes used – the “Wave Clapper” and “Power Wing” – and stress-tested the floaters under stormy conditions. Eco Wave’s device draws energy from wave power using buoys that rise and fall with the “up and down motion, lifting force, change of water level, hydraulic air lock and incident flux of waves.” Wave height in the area reached as high as 5 meters on April 18 and 19, but the tested floaters survived without damage, the company said. The power output in different wave heights and periods and the influence of side waves on the floaters and connections were measured. The floaters were also connected to electric devices to demonstrate electricity supply, and the option to unite both floaters to one electric grid and charge a common accumulator was explored. During the tests, Eco Wave Power found that two medium-scale wave energy devices can power six to 10 households. “Now, imagine what a hundred commercial scale floaters could do,” the company’s press release read. The next phase will be to demonstrate the device’s flexibility in terms of connecting to almost any ocean structure, by moving it to a structurally different coastal area. Right after this, the company will proceed to build a commercial-scale sea wave power plant.
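For a sense of scale, the standard deep-water wave energy flux formula suggests that a handful of households per pair of floaters is plausible. Everything below apart from the formula itself (the wave height, wave period, capture width, conversion efficiency and household demand) is an assumed, illustrative figure rather than an Eco Wave Power specification:

```python
import math

# Deep-water wave energy flux per metre of wave crest (standard formula):
#   P = rho * g^2 * Hs^2 * Te / (64 * pi)
# Illustrative sketch only; the Black Sea wave climate, capture width and
# conversion efficiency below are assumptions, not company figures.
rho, g = 1025.0, 9.81          # sea water density (kg/m^3), gravity (m/s^2)
Hs, Te = 1.0, 6.0              # significant wave height (m), energy period (s), assumed
flux = rho * g**2 * Hs**2 * Te / (64 * math.pi)   # W per metre of crest

capture_width = 5.0            # metres of crest intercepted per floater (assumed)
efficiency = 0.25              # wave-to-wire conversion efficiency (assumed)
per_floater = flux * capture_width * efficiency   # W delivered per floater

household = 1000.0             # average household demand, roughly 1 kW (assumed)
print(f"Wave energy flux: {flux / 1000:.1f} kW per metre of crest")
print(f"Two floaters could supply roughly {2 * per_floater / household:.0f} households")
```

With these assumed inputs the estimate lands within the six-to-ten-household range reported from the tests.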
President Obama will unveil a rule Monday intended to confront climate change by cutting carbon dioxide emissions from power plants, the nation's greatest source of the heat-trapping gas. Obama plans to bypass Congress and use his authority under the Clean Air Act to achieve greenhouse gas reductions. Power generation accounts for about 40% of such emissions. The 3,000-page rule is expected to spark lawsuits, claims of job losses and charges by critics that Obama has launched a new "war on coal." In some coal-reliant states, however, power companies and regulators are expected to take a more pragmatic approach, planning for a future they assume will include carbon dioxide limits. "Carbon policy is going to impact our business, and we have to be prepared for that," said Robert C. Flexon, chief executive of Houston-based Dynegy. "It can be a threat or an opportunity. I'd rather make it an opportunity." Which approach prevails - a legal fight or a political compromise - will help determine how quickly the U.S. will begin to reduce its greenhouse gas emissions. As it seeks to reduce pollution, the administration must ensure that electricity supplies remain reliable and consumer rates do not increase significantly. Some potential approaches are on display in Illinois, which relies heavily on coal, including nine plants operated by Dynegy. Additional power comes from nuclear plants and renewable sources, especially wind. Although some older coal-fired plants have closed, power executives, regulators and some environmentalists say many need to keep running for now, although at less capacity. The reduced output could be made up through energy efficiency and renewable power, they say. "We're pretty consistent with what you're hearing from other states, that you can't have a one-size-fits-all approach, but a suite of tools instead to use to cut emissions," said Lisa Bonnett, director of the Illinois Environmental Protection Agency. Much of the wrangling over the new rule will probably center on its stringency: What baseline will be used to determine how much states have to reduce their emissions? Will states have different standards to meet depending on how much coal generation they have? Will states get credit for cuts they already have made to emissions? The Obama administration wants the rule in place by the end of 2016, just before the president leaves office, but given the likelihood of legal challenges, when the cuts might take effect is unclear. In the past, the federal Environmental Protection Agency has ordered individual power plants to cut specific pollutants by set amounts. But that doesn't work for carbon dioxide because the technology that would allow coal plants to cut those emissions is not currently cost-effective. Instead, the EPA is expected to propose a rule that sets overall pollution reduction targets for states and gives them considerable flexibility on how to meet those goals. In effect, the rule would enact some features of the so-called cap-and-trade plan that passed the House early in Obama's first term but died in the Senate. States would have an overall ceiling on the amount of greenhouse gases their power plants could emit - the cap. They could allow utility companies to trade in the hope of finding efficient, low-cost ways to achieve those goals. Many energy companies have a mix of plants that use different fuels, and some could run cleaner units powered by natural gas or wind and reduce the use of coal-fired generators. 
For the gigantic Prairie State plant in the southern Illinois town of Marissa, however, coal is all there is. The largest U.S. coal plant built in the last three decades, Prairie State was erected in 2012 at the mouth of a coal mine by a consortium of utilities from several states. Its 14-story generation complex can produce 1,600 megawatts of power to serve about 2.5 million customers. Thanks to $1 billion in technology, it emits less pollution, including mercury and sulfur dioxide, than other coal plants. Still, Prairie State's carbon dioxide output is greater than 90% of the country's power plants, according to EPA data, and it cannot cut emissions enough to rival cleaner electricity generation. The power it generates is more expensive than electricity from some natural gas plants, a gap that has generated complaints from communities that buy its output. Plant executives have met with EPA officials on several occasions to argue for more time, said Ashlie Kuehn, Prairie State's general counsel. The company has considered several options to offset the plant's emissions. "Do we install solar panels in our parking lot? Plant trees? Do we partner with a renewables company?" Kuehn asked. "I'm confident EPA heard our concerns. But we're on pins and needles." Chicago-based Exelon owns the state's six nuclear plants, which do not emit greenhouse gases. Utility officials have considered closing as many as five of the plants, however, because electricity prices make recouping the cost of the reactors impossible. A rule that would limit coal-generated electricity would help Exelon's bottom line. "There's a lot of talk about how greenhouse gas rules would negatively affect coal plants, and that's true," said Joseph Dominguez, senior vice president of regulatory affairs at Exelon. "At the same time, not having greenhouse gas rules negatively affects the expansion of clean energy. The rule could help us. Right now, nuclear isn't compensated for its zero-emission profile." Environmentalists like Howard Learner, executive director of the Environmental Law & Policy Center, a Chicago-based advocacy group, hope the rule will foster greater energy efficiency and renewable energy use. In 2007, Illinois passed a law to require power companies to reduce electricity consumption by 2% every year through energy efficiency programs and incentives. "I think the focus of a new rule can't just be about coal as the bad guy," said Anne Evens, chief executive of Elevate Energy, a Chicago-based nonprofit that improves energy efficiency in affordable housing and other large buildings. In many cities, she said, energy consumption from housing accounts for two-thirds of greenhouse gas emissions. The problem is especially acute in parts of the Midwest, where older housing is more common. Other potential offsets to coal emissions are already taking root because of a state law, similar to those in several other states, that calls for 25% of Illinois' power to come from renewable energy by 2025. Along a 17-mile stretch in central Illinois, 240 wind turbine towers rise from corn and soybean fields by the towns of Ellsworth, Arrowsmith and Saybrook. The strong winds that blow through McLean County all winter drew Houston-based EDP Renewables to the area eight years ago. The new greenhouse gas rule could prompt the company and others to build more wind turbines. 
Sitting in an office at the back of his Doyle Oil shop in Ellsworth, Jack Doyle, 85, said he isn't up to speed on the power plant rule, but if he could lease more of his land for wind turbines, he would. He researched the issue on vacation in California, sneaking through a fence into a wind farm and talking to an employee. Now, Doyle has seven turbines on this land and half of one that straddles a neighbor's land. "I don't know about all that stuff in Washington," Doyle said. "But the wind is up there doing nothing, so why not use it to make electricity?"
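The cap-and-trade style approach described earlier in the article (a state-wide ceiling with flexibility over where the reductions happen) can be illustrated with a deliberately simple toy model. Every plant, number and cost below is invented for illustration and bears no relation to the actual EPA rule or any real utility:

```python
# Toy illustration of cap-and-trade: a state-wide cap, plants with different
# abatement costs, and the cheapest reductions happening first. Invented numbers.
plants = {                      # name: (current emissions in Mt CO2, abatement cost in $/t)
    "coal_A": (10.0, 20.0),
    "coal_B": ( 8.0, 35.0),
    "gas_C":  ( 5.0, 60.0),
}

cap = 18.0                      # state-wide ceiling, Mt CO2 (invented)
required_cut = sum(e for e, _ in plants.values()) - cap   # how much must be abated

# Cheapest-first abatement: sort by marginal cost and cut until the cap is met,
# which is the outcome trading is meant to reproduce.
total_cost = 0.0
remaining = required_cut
for name, (emissions, cost) in sorted(plants.items(), key=lambda kv: kv[1][1]):
    cut = min(emissions, remaining)
    total_cost += cut * cost * 1e6          # Mt -> tonnes
    remaining -= cut
    print(f"{name}: cut {cut:.1f} Mt at ${cost:.0f}/t")
    if remaining <= 0:
        break

print(f"Total compliance cost: ${total_cost / 1e6:.0f} million for {required_cut:.1f} Mt")
```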
Latest Ice core Stories Researchers here have used sediment from the deep ocean bottom to reconstruct a record of ancient climate that dates back more than the last half-million years. When the climate warmed relatively quickly about 14,700 years ago, seasonal monsoons moved southward, dropping more rain on the Earth's oceans at the expense of tropical areas, according to climate researchers. At times in the distant past, an abrupt change in climate has been associated with a shift of seasonal monsoons to the south, a new study concludes, causing more rain to fall over the oceans than in the Earth's tropical regions, and leading to a dramatic drop in global vegetation growth. Dust trapped deep in Antarctic ice sheets is helping scientists unravel details of past climate change. The Antarctic Peninsula juts into the Southern Ocean, reaching farther north than any other part of the continent. The southernmost reach of global warming was believed to be limited to this narrow strip of land, while the rest of the continent was presumed to be cooling or stable. Scientists are now making an alarming claim that the earth is on the brink of entering another Ice Age that could last the next 100,000 years. New research indicates that the ocean could rise in the next 100 years to a meter higher than the current sea level – which is three times higher than predictions from the UN's Intergovernmental Panel on Climate Change, IPCC. Climate researchers have shown that big volcanic eruptions over the past 450 years have temporarily cooled weather in the tropics — but suggest that such effects may have been masked in the 20th century by rising global temperatures. Cooperative agreements signed with teams from the University of Wisconsin, Dartmouth College, University of New Hampshire are vital to climate studies
An advanced processing technology being pioneered at UNSW to improve the efficiency of first generation silicon solar cells has turned two of the world's leading solar manufacturers into unlikely collaborators. The School of Photovoltaics and Renewable Energy Engineering (SPREE) has signed a new collaborative research agreement with Suntech Power and Hanwha Solar, the first such agreement between the school and two competing companies. Both manufacturers are interested in an experimental technology whereby tiny metal contact regions can be "self-patterned" into a solar cell's electric insulator, which rests between the silicon wafer and the aluminium back-plate. "Currently, closely spaced small-area metal contact regions in an insulating layer can only be formed by deliberately patterning the holes with a laser scanning over the surface, which is quite slow," says Dr Alison Lennon, a senior lecturer from SPREE. "Other methods, such as aerosol and ink-jet printing, have been explored, however these methods are currently too slow and have not been able to demonstrate the required patterning reliability." Taking cues from the metals processing industry, Lennon and her PhD students are investigating a radical approach to automate and quicken this patterning using aluminium anodisation, a well-understood process where a chemical coating is formed on a metal surface to protect against corrosion. "When you anodise aluminium you can create a porous insulating layer," says Lennon. "This means we can effectively turn an aluminium layer on a silicon solar cell into a dielectric layer with lots of little holes, which is exactly what we want." The UNSW team has made prototypes of cells using this technique. They are now working on understanding how the metal contacts form in order to improve cell efficiencies, and refining the technique so it can produce competitive results on an industrial scale. "We need to make the process robust, with predictable high efficiencies for manufacturers, and we need to make it cost-effective," says Lennon. Lennon, who helped broker the collaborative research agreement, says this is an example of two companies realising they can achieve more as partners than as competitors, and says their support could open the door for faster commercialisation. "Both Hanwha and Suntech operate high-volume solar manufacturing plants, and both are within the top 10 silicon solar cell manufacturers in the world. So if we can demonstrate the viability of this technology, we are both in a position to move the technology into manufacture relatively quickly," noted Dr Paul Basore and Dr Renate Egan, the Advanced R&D Directors for Hanwha Solar and Suntech Power, respectively.
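To see why small openings in the insulating layer matter, a quick geometric estimate shows how little of the rear surface ends up as metal contact. The opening size and spacing below are illustrative assumptions, not figures from UNSW, Suntech or Hanwha:

```python
import math

# Back-of-envelope: what fraction of the rear surface becomes metal contact when a
# dielectric layer is patterned with small openings on a regular grid? Assumed numbers.
opening_diameter_um = 10.0     # diameter of each contact opening, micrometres (assumed)
pitch_um = 100.0               # centre-to-centre spacing on a square grid, micrometres (assumed)

opening_area = math.pi * (opening_diameter_um / 2) ** 2      # um^2 per opening
unit_cell_area = pitch_um ** 2                               # um^2 per grid cell
contact_fraction = opening_area / unit_cell_area

print(f"Metal contact fraction: {contact_fraction:.1%}")     # under 1% of the rear surface
```

Keeping the contacted fraction this small is what such patterning schemes aim for: most of the rear stays passivated by the dielectric while the openings still carry the cell's current.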
Latest Sea ice Stories A growing number of studies have pegged global warming and climate change as a cause of sea ice decline in recent decades. However, a newly published study in the journal Nature Geoscience is showing a vastly different scenario. Have you been experiencing the coldest spring weather in recent memory? Scientists say it may be due to global warming and rapidly melting sea ice in the Arctic. NASA has kickstarted another season of science flights over Greenland to perform research activity with Arctic ice sheets and sea ice. Recent climate-induced changes to Arctic polar bears’ environment are affecting their habits and ability to survive, with the bears having to rely more and more on internal fat reserves New observations using satellites have confirmed University of Washington researchers' analysis that the Arctic Ocean sea ice really is thinning. Researchers are disputing the theory that the culprit behind the historic sea ice minimum was "The Great Arctic Cyclone of August 2012." Melt ponds, a favorite phenomenon among arctic photographers, are turquoise or dark blue pools of water that appear on ice floes during the Arctic summer. Shrinking Arctic sea ice grabbed the world's attention again earlier this year with a new record low minimum. Growing economic activity in the Arctic, such as fishing, mineral exploration and shipping, is emphasizing the need for accurate predictions of how much of the Arctic will be covered by sea ice. The intense, small scale storms known as polar lows could have a tremendous impact on oceanic water circulation and climate predictions, a team of researchers from the US and UK have discovered. The Arctic Ocean, which is located in the Northern Hemisphere and mostly in the Arctic north polar region, is the shallowest and smallest of the world’s five major oceanic divisions. The International Hydrographic Organization recognizes it as an ocean, although some oceanographers consider it the Arctic Mediterranean Sea or simply the Arctic Sea, classifying it as a Mediterranean sea or an estuary of the Atlantic Ocean. Alternatively, the Arctic Ocean can be considered as the northernmost...
Multimedia intern, Executive Office of Energy and Environmental Affairs (EEA)

There is a lot of climate science research going on at the University of Massachusetts – Amherst. Researchers at the Climate System Research Center (CSC) at the University of Massachusetts-Amherst are studying the changes in global climate and how best to deal with these challenges. The facility recently received a $7.5 million grant from the federal government to continue and expand this work. At CSC, graduate students, post-docs, university faculty and scientists collaborate on studies that involve glaciological and meteorological observations, recovery and analysis of paleoclimatic archives, and climate scenario modeling. These are worth looking up in the dictionary! The earth’s climate is undergoing rapid changes due to a period of natural climatic shift that is amplified exponentially by our own pollution. At a Boston unveiling of another climate center housed at UMass-Amherst, the Northeast Climate Science Center (NECSC), Richard Palmer, head of the Department of Civil and Environmental Engineering, noted that just this year the Commonwealth has suffered from weather troubles including disastrous flooding, an early snowfall on fully-leafed trees, a tornado, and an earthquake. The NECSC accepts the inevitability of increased variability and severity in weather, a shift in the seasons, and a rise in sea level, and will use its information, tools and techniques to best forecast and manage these events. Energy and Environmental Affairs Secretary Rick Sullivan, who attended the NECSC launch, noted the extent to which this facility of innovation and research can impact the Commonwealth. The Climate Science Center will be one of eight in the country, but serves the largest region with the most people. What is learned at and shared by the Center will inform Massachusetts as it works toward its commitment to reduce greenhouse gas emissions by 25 percent by 2020.

Dam Ice (posted on Mar 12): You may have noticed many “falling ice” signs around town. Personally, I recently counted five of them on my way to the coffee shop. The icicles and falling ice are actually caused by ice dams, and the Building Science Corporation (BSC) and Massachusetts Department of …

Fish Need Clean Energy, Too (posted on Feb 18): Running a fish farm is an intense operation, one that requires a lot of labor and a large amount of energy. Currently, the McLaughlin Hatchery uses a significant amount of oil to heat its facility. The facility is going to replace its oil furnace with a renewable energy heating system, a new high efficiency wood pellet boiler and pellet storage silo that will reduce greenhouse gas emissions by almost 92 percent, save an estimated $11,432 annually, and reduce annual oil use by more than 5,000 gallons.

Wood Pellets are the New Oil for Regional Schools Reducing Fuel Costs (posted on Feb 12): Did you know that it is possible to heat buildings in the northeast using wood biomass, a renewable energy fuel? With nearly one-third of total energy costs going toward heating our buildings, it is no wonder that Massachusetts school districts are searching for cheaper and …
29
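The wood pellet figures reported above (roughly 92 percent lower greenhouse gas emissions and more than 5,000 gallons of oil displaced each year) can be sanity-checked with a little arithmetic. The Python sketch below is illustrative only: the heating-oil emission factor and the pellet lifecycle share are assumptions chosen for the example, not numbers taken from the hatchery project.

# Rough, illustrative check of the reported hatchery figures.
# The emission factor and lifecycle share are assumptions for this sketch.
OIL_GALLONS_PER_YEAR = 5_000          # reported annual oil use being displaced
OIL_KG_CO2_PER_GALLON = 10.2          # assumed emission factor for heating oil
PELLET_LIFECYCLE_SHARE = 0.08         # assumed residual emissions from harvesting,
                                      # processing and trucking the pellets

oil_emissions_t = OIL_GALLONS_PER_YEAR * OIL_KG_CO2_PER_GALLON / 1_000   # tonnes CO2/yr
pellet_emissions_t = oil_emissions_t * PELLET_LIFECYCLE_SHARE
reduction = 1 - pellet_emissions_t / oil_emissions_t

print(f"Oil-fired heating:    ~{oil_emissions_t:.0f} t CO2/yr")
print(f"Pellet-fired heating: ~{pellet_emissions_t:.0f} t CO2/yr")
print(f"Reduction:            ~{reduction:.0%}")

With these assumed inputs the displaced oil amounts to roughly 51 tonnes of CO2 a year, and a reduction of almost 92 percent corresponds to treating pellet heat as largely biogenic, with only a small residual lifecycle share.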
Temperatures may be rising more slowly than expected because of two natural oceanic cycles: the latest refutation of the global warming “pause”.

LONDON, 1 March 2015 - US scientists have suggested yet another explanation for the so-called pause in global warming. They think it might all be down to the juxtaposition of two independent natural climate cycles, each with a period of half a century or more, one of which is blowing cold and the other not very hot.

Between them, the phenomena known to meteorologists as the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation could account for the seeming slowdown in predicted temperature rises.

Any pause or hiatus in global warming is only apparent: in fact, 14 of the warmest years on record have happened in the last 15 years, and 2014 was ranked separately by the World Meteorological Organisation, the US National Oceanic and Atmospheric Administration and the US space agency Nasa as the warmest on record. But overall, the palpable increases in average temperature per decade recorded in the last 30 years of the 20th century have not been maintained, and climate scientists and meteorologists have been trying to work out why.

The latest proposal is from Byron Steinman, a geologist at the University of Minnesota Duluth, and Michael Mann and Sonya Miller of Pennsylvania State University. Professor Mann is the scientist who, much to the fury of people who deny climate change, first formulated the famous “hockey-stick graph”, which highlights the magnitude of change that threatens to overtake the global climate as greenhouse gas levels rise because of human activity.

They report in Science that the northern hemisphere is warming more slowly not because of the Atlantic oscillation, which has been relatively flat, but because of a second, different but still natural downward trend in the Pacific cycle.

This is not the only explanation on the table. In the past two years Climate News Network has reported that climate scientists certainly expected a slowdown, just not right now; that planetary measurements might be incomplete or misleading; or that even though average levels were down, this masked a series of hotter extremes.

The oceans have certainly been under suspicion. One group has already identified the cooling Pacific as a damper on global warming. Another has suggested that the “missing heat” is in fact collecting in the Atlantic depths. Yet another has questioned the role of the trade winds, while still another has pointed to an upswing in volcanic activity that could have delivered a fine smear of sunblock aerosols to the atmosphere.

Any or all of these could have some role in the big picture. The climate would vary anyway, and the question in every case is: how much would any or all natural variation affect the overall path of change driven by increasing carbon dioxide levels in the atmosphere?

The latest study is based on sophisticated climate models that match the predicted impact of the great ocean-atmosphere cycles against the pattern of climate shifts recorded in the past. (A toy numerical sketch of how two slow cycles can superimpose on a steady warming trend follows at the end of this section.)

“We know that it is important to distinguish between human-caused and natural climate variability so we can assess the impact of human-caused climate change, including drought and weather extremes,” Professor Mann said.
“The North Atlantic and North Pacific Oceans appear to be drivers of substantial natural, internal climate variability on timescales of decades.” – Climate News Network
29
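To make the superposition argument above concrete, here is a minimal synthetic sketch in Python. The linear forced trend and the two sinusoids standing in for the Atlantic and Pacific oscillations, including their periods, amplitudes and phases, are assumptions chosen for illustration; this is not the Steinman, Mann and Miller model.

# Toy illustration: two slow natural oscillations superimposed on a steady
# forced warming trend can flatten the apparent short-term trend.
import numpy as np

years = np.arange(1900, 2021)
forced = 0.015 * (years - 1900)                              # assumed forced warming, degC/yr
atlantic = 0.12 * np.sin(2 * np.pi * (years - 1900) / 65)    # assumed ~65-year Atlantic cycle
pacific = 0.10 * np.sin(2 * np.pi * (years - 1930) / 55)     # assumed ~55-year Pacific cycle
temperature = forced + atlantic + pacific

def short_term_trend(start_year, length=15):
    """Least-squares trend (degC per decade) over a short window."""
    mask = (years >= start_year) & (years < start_year + length)
    return 10 * np.polyfit(years[mask], temperature[mask], 1)[0]

for start in (1975, 1990, 2000):
    print(start, round(short_term_trend(start), 2), "degC per decade")

# The forced trend is 0.15 degC per decade throughout, but the apparent 15-year
# trend swings above and below it as the cycles reinforce or oppose the warming;
# in this setup the window starting in 2000 comes out close to flat.

In this toy series the underlying forced warming never changes, yet the most recent window shows almost no trend, which is the qualitative point of the study: internal ocean variability can temporarily mask the forced signal.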