Saturday, August 31, 2019

Level 2 Childcare

MU 2.4 - Contribute to children and young people’s health and safety

1.1 Outline the health and safety policies and procedures of the work setting

Nursery policies:
- babysitting policy
- behaviour management policy
- confidentiality policy
- display policy
- equality and diversity policy
- key carer policy
- no smoking policy
- partnership working policy
- pet care policy
- safeguarding children policy
- sick child policy
- special educational needs policy
- staff personal training policy
- training policy
- use of cameras and photography policy
- water and milk policy

Nursery procedures:
- accident/incident reports procedure
- complaints procedure
- procedures for supporting children speaking EAL
- medicine records procedure
- misplaced child procedure
- nappy changing procedure
- nursery evacuation procedure
- parental responsibility verification procedure
- register completion procedure
- guidelines to support children, families and staff in the event of a parental separation
- significant incident reporting procedure
- suitable person and clearance procedure
- transition to school procedure
- uncollected child procedure
- updating child information procedure

Appendices:
- behaviour incident form
- child incident record
- training request form
- weekly staff register
- my day sheet

1.2 Identify the lines of responsibility and reporting for health and safety in the work setting

In a work setting, health and safety is every employee’s responsibility. If I notice something that could affect someone’s health and safety, I report it to my room leader or a senior member of staff at the work setting. For example, when I am working in the nursery and I see a child hurt themselves, I report to the room leader or a senior member of staff to inform them of what has happened.
- Each employee is responsible for health and safety.
- Report to a room leader or senior member of staff when a child’s health and safety is affected.

MU 2.4 - Contribute to children and young people’s health and safety

1.3 Explain what risk assessment is and how this is managed in the work setting

A risk assessment is an important step in protecting a workplace and the staff working there, and it also complies with the law. Doing a risk assessment helps you focus on the risks present in the workplace and the risks which could potentially harm children. An example of this is ensuring that spillages are cleaned up so people are not at risk, and ensuring all loose wires and plug sockets are covered. A risk assessment is managed by examining the workplace and identifying the potential risks which could cause harm to others. It protects a business, employers and employees, as well as complying with the law, by focusing attention on the risks that have the potential to cause harm in a workplace. Ways of controlling risks can be straightforward and simple, for example ensuring spillages are cleaned up quickly and effectively so people do not slip. Risk assessments are simply a careful examination of what, in a work setting, could cause harm to people, in order to weigh up whether you have taken enough precautions or should do more to prevent harm. When working in a childcare setting there are many potential risks, so when starting a shift as a Nursery Nurse it is our responsibility to complete a checklist. On finding a potential risk, such as a broken plastic toy, we have to decide on the most appropriate outcome for the children; in this case it would be to dispose of the toy.
Throughout the day, myself and other team members make sure that if we see any piece of furniture or any toy that is damaged, we deal with the situation.

MU 2.4 - Contribute to children and young people’s health and safety

2.1 Explain why a safe but challenging environment is important for children and young people

A safe but challenging environment is important for children as it teaches them to be cautious and shows them what is dangerous and what they should and shouldn’t do. For example, when I am working in the nursery and a child hurts themselves, this shows them what risks there are in the nursery and teaches them to be more cautious. A safe but challenging environment is also good for a child’s development, as it shows them what is challenging in the nursery room.

2.2 Identify the differences between risk and hazard

A risk is the chance that harm could occur in a situation; a hazard is something that could actually cause that harm. An example of this is when the children are playing in the water tray at nursery: there is a risk that water could spill onto the floor and a child slip over, so when the children are playing in the water tray I have to watch for any water spillage so I can mop it up. A hazard is something that is likely to cause harm to the children: when it snows outside and the snow turns to ice, this is a hazard, as the children can slip and hurt themselves on the ice.

2.3 Identify potential hazards to the health, safety and security of children or young people in the work setting

When working in a nursery there are many potential hazards. Wet floors and spilt drinks are a hazard to both the children and staff; electrical items are more of a hazard to children but could potentially be a hazard to staff.
Any items which are left lying around or trailing across the room could cause a trip hazard, and this is also a hazard to both children and staff. When children are eating, a potential hazard is a child choking, in which case a member of staff would have to be first aid trained and have a CRB check in order to help the child. Viruses can also be spread easily at a nursery, so children and staff can easily catch them.

MU 2.4 - Contribute to children and young people’s health and safety

3.1 Identify non-medical incidents and emergencies that may occur in the work setting
- child falling over
- children arguing/play fighting
- behavioural issues
- complaint from a parent
- data loss
- lack of planning

Friday, August 30, 2019

Government Policy Essay

The Wall Street Crash, which occurred in October 1929, was the mass selling of shares, which led to a massive drop in prices, which in turn prompted further selling of shares. In one day, $14 billion was wiped off the value of the stock market. This panic selling was triggered by rumours and fears that the stock market was about to collapse (rumours brought about by large shareholders, like Baruch and Kennedy, dumping shares, and by news of the collapse of the British financial empire, which was financed by debt and credit, just like America’s). But why did a sudden loss of confidence have such massive repercussions? The answer lies in the long-term problems in the economy which had created instability and weaknesses. Until October 1929 these weaknesses had been masked by the confidence of American people and businesses; the high prices of stocks and shares were the result of speculation, the belief or confidence that they were worth more. But as confidence crumbled, there was nothing left to sustain the economy. The key reason the economy could not sustain itself was that the policies of the government had created major faults in every area of the American economy, which meant that what started as mass selling of shares resulted in a major Wall Street Crash. Firstly, government policies were responsible for the Bull Market of the 1920s: the government had essentially promoted speculation by allowing the Federal Reserve to keep interest rates low. This encouraged lending and borrowing, which meant that millions of Americans were able to buy now and pay later for consumer goods such as fridges, radios and cars. Similarly, by keeping interest rates low, the Federal Reserve essentially encouraged lending to those wanting to play the stock market, as low interest rates made ‘buying on the margin’ attractive.
With as many as 60,000 people involved in buying on the margin (or 10% of American families), and millions more buying now and paying later, the cycle of prosperity and stock market investment was actually based on debt and credit. Secondly, the government encouraged the Bull Market by publicly rejecting critics who warned of danger signs in the economy. For example, in September 1929 Roger Babson warned that the existing prosperity was based on a ‘state of mind’, not on economic facts. He predicted a crash and massive unemployment, but he was criticised as being pessimistic and trying to undermine the country’s wealth. Experts seemed confident that the market was strong and so ignored the warnings of economists. If the government had been more careful about lending and had listened to the warnings, people would have only purchased things within their means, rather than buying or investing in what they couldn’t afford. Therefore, there would not have been such overconfidence (people believed that high levels of demand and high volumes of stock market trading proved that the economy was excellent), which means that the stock market would not have been overvalued and so vulnerable to a loss of confidence and a crash in the first place. As well as allowing the Fed to keep interest rates low, government policies also led to a Crash by reducing the ability of American businesses to sell their goods abroad. For example, the Fordney-McCumber tariff of 1922, which was designed to protect the prices of American farmers’ goods, actually resulted in retaliatory tariffs from foreign countries: Spain, Germany and France put tariffs on American cars and wheat. As a result, when the American economy did begin to slow down in the later 1920s, businesses and farmers could not sell their surpluses abroad, which led to a drop in profits and a reduction in production, with an impact on employment.
Therefore, had the government not pursued a protectionist policy in the early 1920s, there would have been no loss of employment in the late 1920s, which means production rates would have been maintained, which would have ensured that money was kept in circulation and shares kept their value. To make matters worse, by making it harder for European countries to sell their goods in America, the government’s protectionist policy made it harder for European countries to repay the war debts they owed to the USA. To try to rectify this, the government chose to set up the Dawes Plan, whereby it lent Germany $250 million to pay its reparations to Britain and France. In 1929, the government agreed to let Germany restructure its loan repayments to the USA (the Young Plan), giving it a longer period of time to repay. Whilst in principle these actions were supportive, in practice they artificially propped up the German economy, which led to massive investment in Germany ($3,900 million was invested after the Dawes Plan) as investors hoped to make a quick buck, just as they were doing in the American ‘get rich quick’, speculative economy. This meant that government policy had in fact encouraged investment at home and abroad based on speculation. When investors realised that the returns (values) of stocks at home and abroad were artificially high, it would trigger a loss of confidence and massive sales, i.e. the Wall Street Crash. Another reason why government policies caused the Wall Street Crash is that the government pursued a laissez-faire policy towards businesses and regulation. As a result, the 1920s were characterised by the creation of trusts and corporations, such as US Steel. The government actively ignored anti-trust laws, rather than using its federal powers to police and regulate industry. In a case heard at the Supreme Court the government argued that big businesses were not illegal, so long as some competition remained.
However, in reality, the trusts wiped out competition, fixing prices and swallowing up smaller businesses (for every 4 businesses that succeeded in the 1920s, 3 failed). As a result, thousands of smaller businesses failed, whilst the trusts became ‘captains of industry’, with the knowledge and the money to produce things very quickly and efficiently. This meant the stability of the American economy depended on the actions and profits of a few large companies, such as Insull and Ford, creating a dangerous situation. What is more, the government’s lack of regulation of corporations meant firms like Bethlehem Steel Corporation and Electric Bond & Share were not prevented from using their profits to speculate on the stock market, adding further insecurity (gambling!) to Wall Street. Unfortunately, by the end of the 1920s, many trusts, such as car giants like Ford, were producing more than was needed (and could not sell their surpluses abroad thanks to the government’s tariff policy). As their sales dropped, so did wages and employment, leading to less money in circulation, less demand and a significantly weaker economy. As the trusts’ sales dropped, it also led to fewer stock market investments, which furthered the loss of confidence in Wall Street. Government policy concerning the regulation of banks and banking was also a key factor in the Crash. There were no controls concerning mergers and competition so, by 1929, 1% of America’s banks controlled 46% of the nation’s assets. This meant that the stability of the country’s banking system depended on the stability of just 1% of the banks, which was a precarious situation (a crash could see almost half of the nation’s assets disappearing!). What is more, the lack of regulation in banking meant that the government did not have complete control over the actions of the Federal Reserve Board. For example, in March 1929, one member of the Fed (Charles E. Mitchell) acted without the agreement of the Fed to publicly announce that if money became tight because of higher interest rates, his bank (New York’s National City Bank) would personally pump $25 million into the broker’s loan market. This was called the single most irresponsible decision of 1929, as it encouraged lending and gambling on the stock market to soar at a time when the economy had slowed significantly. The government also did not regulate individuals working on the stock market; for example, greedy individuals like William Durant and his ‘bull pool’ were able to artificially inflate the market for their own gain, only to sell quickly and leave others with significant losses. Furthermore, government policies exacerbated the country’s massive unequal distribution of wealth, which itself contributed to the long-term weaknesses in the economy and hence the Crash. In 1929, tax returns of 27 million families showed that 12 million families were earning $1,500 a year or less, and another 6 million families were earning less than $1,000 a year. This put at least 50% of the population in a position of serious economic hardship. In particular, agriculture faced significant problems: the mid-war Federal Farm Loan Act had offered farmers loans at lower interest rates in order to buy machinery to help meet war demand, but these loans became difficult to repay when demand reduced as the war ended. After World War One, prices for wheat dropped from $2.50 a bushel to less than $1, and wool from 90 cents to 19 cents. Although the government passed tariffs to relieve these problems, in the long term the tariffs made the situation worse because foreign economies put retaliatory tariffs in place. The post-war Agricultural Credits Act funded 12 banks to offer loans to any farmers working co-operatively. However, the Act ultimately meant more small farmers fell into debt, as the larger farmers who could afford the loans squeezed the small farmers out of the market.
Prohibition made farmers’ problems even worse by cutting the need for grain previously used in alcohol. Ultimately, America’s unequal distribution of wealth should have signalled to the government that its capitalist system was not working, and steps should have been taken to alleviate the imbalanced spending power. Because the government did not alleviate the situation, the divide grew bigger (making these people dependent on credit and loans, which they could not repay because of their lack of employment), making the economy more fragile and unstable. Therefore, in October 1929, when massive selling began on the New York Stock Exchange, a mad panic set in. The confidence bubble had burst, triggered by a few rumours and fears that the market was going to crash. Had the government not pursued such a laissez-faire approach to the management and regulation of banking and business, and had it responded earlier to the rich/poor divide in American society, the Wall Street Crash would never have happened: there would not have been such over-inflated, false confidence; there would have been foreign markets to trade with; and banks, businesses and individuals would have been regulated and acting in the interest of long-term rather than short-term gains.

Thursday, August 29, 2019

Black world study Intellectual Autobiography Essay

The program covered the black historical experiences, the African-American experience, the race and ethnicity struggles, social stratification and the black American renaissance movements that fostered black cultural identity (Bobo and Hudley 43). I am now aware that the African continent is the cradle of mankind, since there is documented anthropological evidence of the existence of early human beings and of early civilizations in countries such as Egypt. The history of European colonization of African countries and the enslavement of blacks in the Western Hemisphere was critical for understanding how blacks contributed to the diverse nature of societies across the world and how their forced labor furthered overseas trade. I learned about the origins of the transatlantic slave trade that mainly supplied slave labor to the southern cotton plantations in the New World between 1400 and 1800. The Jim Crow laws and Black Codes discriminated against the blacks and perpetuated slavery by ensuring segregation and the arrest of those who violated them or escaped slavery. The blacks encountered extreme poverty during the Great Depression due to the lack of formal jobs and low education levels (Bobo and Hudley 47). The program introduced me to the race, gender and class oppression that affected the Black world and to how the blacks struggled to overcome the various forms of enslavement, oppression, discrimination and prejudice in society. The program enabled me to understand how the black struggles against oppression led to amendments of the US constitution, such as the provisions that allowed equal participation in elections and fair justice procedures. Accordingly, black power movements strengthened human rights activists who spearheaded anti-discrimination legislation such as the Civil Rights Act of 1964.
I have learned that black people used civil disobedience to steer racial reforms; organizations that spearheaded the demand for equality include the Black Panther Party and the Black Student Movement (Bobo and Hudley 44).

Wednesday, August 28, 2019

Humanities Ethics Research paper on Embryonic stem cell research

Research on ES cells has brought to the fore certain considerations with regard to human ethics. For the research to take place, the human embryo has to be harvested in order to investigate the phenomena of interest. A balance cannot, however, be established between using the embryo to help another life and allowing the embryo to continue to exist. What are the ethical dilemmas involved in embryonic stem cell research? Despite the hot debate that surrounds the research use of embryonic stem cells, they offer a good opportunity for harnessing certain therapies. Due to the controversy, most countries have adopted their own different rules that regulate the application of human embryonic cells in research. Opinion is divided on the value of human life and the life of the embryo. This creates an ethical dilemma that complicates the application of ES cells in solving many clinical problems (James 45). The moral dilemma establishes a situation in which a choice has to be made between two sensitive options: one, the duty to prevent or relieve patients from chronic pain, and two, the duty to respect the inherent value of human life. ... It has, however, been difficult to approve one option vis-a-vis the other. The argument then goes that it is not ethical to destroy an embryo, given that it possesses full moral status beginning from fertilization and as it progresses through maturity. Others observe that an embryo should be considered a person despite the fact that it is still an embryo (James 45). They espouse the retention of the life of the embryo by stating that there is a continuous process involved in the life of an embryo beginning from fertilization; they note that just as an infant is considered a human being, so is the embryo.
The argument goes further that people tend to dismiss the significance of an embryo as a person just because it does not have the characteristics of a human being (Holland 43). This should not be the basis of justification, because through the process of growth the embryo will develop the said attributes. They concur that it is arbitrary to determine the period or stage when personhood commences, hence an embryo should not be dismissed as not being a person (Holland 43). However, another explosive counter-argument has continued to make decisions on ES cell application very difficult. It explains that an embryo lacks the qualifications to be described as a person (Holland 43), because unlike humans it does not have the emotional, psychological and physical properties exhibited by persons. As such, no interest is demonstrated by the embryo that warrants protection, and it should be used to help persons who are in deep pain with their lives hanging in the balance (James 45). Another argument indicates a “cut-off” point at 14

Tuesday, August 27, 2019

The Power of an Apology in Medical Errors Essay

Unfortunately, when doctors make mistakes during treatment, the consequences can be severe. Sometimes, they can even result in serious injury or even death. In recent years, health care providers have become cautious about offering expressions of empathy or sympathy to patients whose disease did not have a successful outcome. These unsuccessful outcomes may be the result of known complications, clear errors, or other circumstances. The health care providers' caution in this area of communication is the result of the increasing number of lawsuits filed against them by their patients. Physicians understandably have a difficult time determining appropriate communication techniques to convey concern for the patient without inadvertently implying their own fault or guilt. It is not unusual for a physician's compassionate and empathetic actions to be misunderstood and later described to a jury as an apology for his error. Unquestionably, there are situations in which clear errors have been committed. Although rare, those events should certainly be followed with a sincere apology and appropriate assistance to the patient. In situations where the physician is not at fault for the undesired outcome, or when responsibility is difficult to determine prior to an investigation, it is still important for physicians to have the skills necessary to express empathy and concern without suggesting to the patient that they bear legal responsibility. Recent studies have suggested that failing to apologize for clear errors may prompt more claims than previously suspected. Research suggests that apology plays an important role in professional relationships. If done properly, an apology may not even be an issue in a subsequently filed lawsuit. Additionally, it is just common sense that demonstrating empathy and concern for patients during their most difficult times is the right thing to do.
If lawsuits are subsequently filed in such situations, physicians will be seen in a much more favorable light if they have attempted to show appropriate concern and interest in their patients' well-being. Many physicians wonder about the purpose of expressing remorse to a patient over a bad outcome or untoward incident. Indeed, there are large moral as well as ethical components to this issue. Doctors are also human, and every human has the need to be convinced that, at the end of the day, they did what was morally right. The human conscience may not give the physician in question peace of mind if they fail to heed the inner voice that wants them to express their feelings of concern. This may apply not only to cases of malpractice where the patient is severely injured or even dies, but also in cases where there are no visible consequences. When an apology is truly warranted, accepting and expressing responsibility for his or her mistakes is the first step for the physician towards forgiving themselves, and is the most likely way to maintain a good relationship with the patient. Demonstrating true sadness to the patient for his pain will help him and will strengthen the bond between physician and patient. It is important for patients to feel that doctors care about them. Patients who have good relationships with their

Monday, August 26, 2019

Multi project assignment Essay

The successful completion of projects and tasks is often the result of the combined efforts of the various members of a given team. It is important for teams to ensure that they are able to work as a single cohesive unit so as to be effective and successfully achieve the team’s goals. Some of the good things that happened to the Conglomerate team during the course of the game were that the team members showed good cooperation and were able to communicate effectively and unite to work as a team, exhibiting sufficient harmony and trust in their work. The groups were also able to deal effectively with some of the risks, such as the potential failure by suppliers to make deliveries in good time and the risk of faulty products. In addition to this, changes were well received by the team members, who adapted effectively and ensured that the processes proceeded flawlessly.

Communication

The open communication of both information and ideas is widely perceived to be an integral aspect of effective team functioning. Good communication between team members is associated with a number of outputs commonly attributed to positive communication; these outputs typically include good project performance and innovation, as well as commercialized products and patents (Neider, 2005). After having been assigned to our respective groups by the tutor, my fellow Conglomerate group members and I met to develop a possible strategy for how we would tackle the exercise that we had been assigned. Having been selected as the group’s PMO, I was tasked with the key responsibility of offering adequate help to my fellow group members to help them calculate the overall and total budget.
We managed to elect a PM for our group, and the various pairs from the different member countries set off to discuss the possible strategies that they would use so as to be able to effectively control the timelines and budgets of the country that they represented as council members. During this initial meeting, considerable time was spent setting both the deadlines and the milestones that the various council members in the different countries were supposed to achieve. In my capacity as the PMO, I also took time to painstakingly highlight individual project objectives for the different council members and to ascertain that all the group members clearly understood their assigned roles. All the Conglomerate team members actively contributed to the general development of individual checklists that would be used by the team members so as to avoid any eventuality of their overlooking critical aspects of their projects. The group had an efficient communication strategy, as the team members, the PMO and the PM were able to communicate with each other during the frequent meetings that were scheduled to be held briefly on a daily basis. In the event that a matter happened to arise, it was possible for the members, the PM and the PMO to communicate via email and phone calls. The team members were able to debate and unanimously agree on a timeline for the different stages of the projects by the different countries, so as to minimize the countries’ spending on the projects while at the same time avoiding penalties resulting from reassignments and cancellations of bookings.

Sunday, August 25, 2019

Evolution of Computer Technology Essay

By scheduling work in this way the CPU is never idle, which optimizes the use of resources. Superscalar execution is a combination of scalar and vector processing which is responsible for instruction-level parallelism within a single processor: it executes more than one instruction during a clock cycle. It also includes pipelining and makes sure that instructions taken from a sequential stream are dynamically checked for dependencies between instructions. This enhances the speed of the computer. Multitasking is a concept where several jobs run in parallel using common processing resources such as the CPU and memory. The system switches from one job to another so that the tasks appear to execute simultaneously; this switching is called context switching. A security concern of multitasking is the ability of one process to inadvertently or deliberately overwrite a memory allocation that belongs to another program. Memory therefore needs to be protected from other processes, and this can be done using semaphores. Semaphores employ two basic functions, wait and signal, to synchronize process operation and functioning, and they allow processes concurrent but controlled access to memory resources. Memory management schemes involve garbage collection, which depends on various memory allocation and release methods for removing objects from memory. Researchers at Xerox PARC have developed a powerful formal model for describing the parameter spaces for collectors that are both generational and conservative. A garbage collection becomes a mapping from one storage state to another. They show that storage states may be partitioned into threatened and immune sets, and the method of selecting these sets induces a specific garbage collection algorithm.
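As a rough, hypothetical sketch of the wait and signal semaphore operations described in the multitasking paragraph above (the pool size, worker function and variable names are invented for illustration), consider:

```python
import threading

# A counting semaphore initialised to 2 guards a pool of two shared
# "buffers": acquire() is the classic "wait", release() is "signal".
pool = threading.Semaphore(2)
results = []
results_lock = threading.Lock()

def worker(n):
    pool.acquire()              # wait: blocks while both buffers are in use
    try:
        with results_lock:      # protect the shared list itself
            results.append(n)
    finally:
        pool.release()          # signal: wakes one waiting thread, if any

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))          # [0, 1, 2, 3, 4]
```

At most two workers hold the semaphore at any moment, which is the protection-of-shared-memory idea the essay attributes to semaphores.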
A pointer augmentation provides the formalism for modeling remembered sets and imprecise pointer identifications. Finally, they show how the formalism may be used to combine any generational algorithm with a conservative one. They used the model to design and then implement two different conservative generational garbage collectors. Their Sticky Mark Bit collector uses two generations and promotes objects surviving a single collection. A refinement of this collector (Collector II) allows objects allocated before an arbitrary point in the past to be immune from collection and tracing. This boundary between the old objects, which are immune, and the new objects, which are threatened, is called the threatening boundary. More recently, these authors have received a software patent covering their ideas. Any type of dynamic storage allocation system imposes both CPU and memory costs. These costs often strongly affect the performance of the system and pass directly to the purchaser of the hardware, as well as to software project schedules. Thus, the selection of the appropriate storage management technique will often be determined primarily by its costs. Cache memory: it is a faster memory which is placed between the CPU and the main memory and is
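The two-generation, promote-on-survival behaviour attributed above to the Sticky Mark Bit collector can be sketched very loosely in Python. This toy model (all class and variable names are invented) only illustrates the threatened/immune partition, not the actual Xerox PARC collectors:

```python
class Obj:
    """A toy heap object holding references to other objects."""
    def __init__(self, name, refs=()):
        self.name, self.refs = name, list(refs)
        self.old = False   # promoted objects become "immune"

def collect(heap, roots):
    """One sticky-mark-bit-style collection: trace from the roots, free
    unreachable *new* objects (the threatened set), promote new survivors."""
    marked, stack = set(), list(roots)
    while stack:                       # simple iterative mark phase
        o = stack.pop()
        if o not in marked:
            marked.add(o)
            stack.extend(o.refs)
    survivors = []
    for o in heap:
        if o.old or o in marked:       # immune or reachable: keep
            if not o.old:
                o.old = True           # survived one collection -> promote
            survivors.append(o)
    return survivors                   # unmarked new objects are dropped

a = Obj("a"); b = Obj("b", refs=[a]); c = Obj("c")   # c is unreachable
heap = collect([a, b, c], roots=[b])
print([o.name for o in heap])   # ['a', 'b']  -- 'c' was threatened and freed
print(all(o.old for o in heap)) # True -- survivors promoted to the immune set
```

A second call to `collect` would then leave `a` and `b` untouched even if they became unreachable, mirroring (crudely) the immunity of the old generation.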

Saturday, August 24, 2019

Are there enough initiatives currently running to develop respect to Essay

Are there enough initiatives currently running to develop respect to UK football officials - Essay Example gments and the types of belief that give rise to hatred and intolerance (Bodin, et al, 2005, p.163). The Commonwealth of Australia (2008) once stated that "it (sport) helps to build social cohesion that binds families, communities, regions and the nation. No other facet of our culture has the capacity to bring together so many different streams --- breaks barriers and unites those who have nothing else in common" (Hoye et al, 2009, p.225). These, among others, are benefits of sports events inspiring the younger generation to achieve excellence in their field. But present-day sport is more than a game: it displays on-field violence between players and teams, ill-disciplined behaviour, rule-breaking and cheating, all treated as part of the game in the cause of winning. As mentioned earlier, sport benefits society with positive inspiration, but it can also harm society through players' on-field behaviour. It is further stated that in professional team sports with a high public profile, including association football (soccer), disciplinary transgressions by players and the sanctions taken by referees provide a rich source of subject material for debate among pundits, journalists and the general public. The actions of players and referees in UK football are keenly and intensely scrutinised in the modern game: foul play is followed by the referee's action, which in turn creates confrontations in which players disrespect the decisions of on-field referees (Dawson et al, 2007). This paper attempts to discuss the issues of football players' on-field behaviour and their behaviour towards football officials in the United Kingdom. Sources of disrespect against football officials: the world may think that players are the only ones who disrespect referees over their actions and the results thereupon.
But Jez Moxey, Wolves Chief Executive, states that the Premier League's 'big clubs' and their players disrespect referees and think that they are above the rules, and he called for players to be booked for disrespect against football officials. He further added that 'it is outrageous for a player to turn his back on a referee when he's being booked', and for players to later gang up on officials (BBC, 2011). Not only players: there have been incidents when managers or owners of clubs have shown disrespect towards match officials or referees for their decisions on the field. Manchester United manager Alex Ferguson was banned for five matches by the FA for post-match criticism of referees. Arsene Wenger, the Arsenal manager, was also not spared for making comments about officials (Reuters, 2011). However, it is reported that this was not the first time Alex Ferguson had abused referees; he has a long history of abusing match officials. The Football Association mentioned that assaults on match officials and referees have gone up by 25% and attacks on match officials

Cite and Correct Using Risk Assessment Assignment - 2

Cite and Correct Using Risk Assessment - Assignment Example zards as they can hit employees and injure them badly, that is, in case they are opened while an employee is walking towards the point where the gate or door is swinging open. The dust, gas, vapor and fumes produced are hazardous as they may damage parts of the body exposed to the substances. This may cause burns. Additionally, chemicals may attack some organs, for example the lungs or liver, when the body absorbs them. An employee may touch bare wire, equipment that is not grounded properly, or wet surfaces, which may lead to the employee being shocked. This may cause burns, especially when clothing catches fire, and may even lead to death. Some machinery reduces oxygen levels, and this is hazardous to anyone around as it may lead to suffocation. Employees deprived of oxygen for long periods may suffer brain damage and, in extreme scenarios, death. For the walking work surface, any person entering the company is at risk. This is because anyone can fall and get hurt, since everyone uses the same polished floors. Additionally, doors swinging while being opened may hit anyone. Every single individual in the company is at risk of exposure. This is because the chemicals produced, dust, gas, vapor and fumes, not excluding the noise, radiation and extreme temperatures, will not spare anyone. However, people who spend more time in the company are more at risk than visitors, due to longer periods of exposure. Everyone in the company is at risk in case the oxygen levels are low. This is because everyone needs oxygen and nobody will be spared. This also includes hazardous chemicals increasing to levels beyond the Permissible Exposure Limits. Impacts = the risk is only localized to the polished floors and the risk only increases when the floor is wet. Slipping may cause injuries but the probabilities of the injuries being serious are quite low.
Impacts = the risk is only localized to a specific door being opened while an individual

Friday, August 23, 2019

Current Issue Paper Term Example | Topics and Well Written Essays - 1250 words

Current Issue - Term Paper Example Although slums had become common by the end of the 20th century and dominated a major part of American and European society, the evolution of slums had taken place in New York City. History says that the Five Points slum area was initially a lake named Collect. Soon slaughterhouse waste and garbage began to be disposed of at the site of the lake Collect. As hot summers poured in during the early 1800s, the lake, with all sorts of garbage and no sanitation taken into account, went dry. That was when the first slum, the Five Points slum, began to form on that location. People of different nationalities who saw New York City as the key to opportunity and had migrated to the city had to settle in the Five Points slum (Moreno 32). In Europe, slums had become common by the 1920s, during the Victorian period. Charles Dickens, one of the greatest novelists and realist writers, described a slum as a low area with a bad housing system. According to a census held in 1920, there were 25,000 slums in America alone (Rogers 33). With an increasing number of slums in any country, the economy is deeply affected. On one hand, many political powers seek measures to demolish or upgrade slums for their political interest; these powers encourage the migration of individuals and families from rural to urban areas in order to secure their voting blocs. On the other hand, slums house a major share of the population in many countries. In order to strengthen a nation economically, slums need sincere attention. The residents of slums should be offered the same rights as any other citizen, and their lives and lifestyles should be improved, not exploited for personal gain. An economy can never prosper while slums and the conditions of the people in them persist. Influential business tycoons also seek to conquer these small urban settlement areas for their personal means and not for the

Thursday, August 22, 2019

Aqua Fish Canada Inc Essay Example for Free

Aqua Fish Canada Inc Essay May 2007 to April 2009 Update Over the past two years, AFC has faced more intense competition, particularly from aquaculturists in Chile. In addition, Chilean output has increased the supply of salmon and the Canadian dollar has strengthened in relation to the U.S. dollar. As a result, AFC has been unable to meet its budgeted revenue targets. Stocks of unsold harvestable fish have increased, as well as the corresponding cost of maintaining the fish, and the company barely made a profit in fiscal 2009. In January 2009, AFC lost one of its largest retail customers, SF Seafood, to a new salmon aquaculture firm, Nu-Farm Inc. This new competitor uses a sophisticated, computerized system for supply chain and product distribution functions. The system allows Nu-Farm to establish web links with customers, such as SF Seafood, and to manage orders and deliveries directly for each of the customer's retail outlets. This has eliminated the need for customers to make separate arrangements to receive and warehouse fresh fish, and to ultimately distribute the fish to their retail locations. In February 2009, 2,000 kilograms of fish from Site 4 were rejected by three important customers, two of which are located in the United States. An internal investigation revealed that two employees at Site 4 had neglected to follow established procedures and failed to reject some fish that did not meet quality standards and were not certified by the staff veterinarian. In April 2009, employees neglected to secure some of the net-pens at Site 3. During a storm, more than 300,000 kilograms of young fish escaped from these net-pens and most were subsequently lost to predators. Although the company's property and liability insurance covers criminal theft of fish, it does not cover the loss of fish from disease, parasites, escape, or predators.
In addition, there is no liability coverage with respect to food poisoning or diseases caused by the salmon, or environmental damage caused by the farm's operations. The lost salmon had a book inventory value of $690,000, which was written off in fiscal 2009. The ultimate sales value of the lost fish, had they grown to harvestable weight, is approximately $1.5 million. It will cost $200,000 to repair the damaged pens. Domestic and Export Markets Guy Mills is dissatisfied with the company's geographic sales distribution, which has not changed since 2006, and would like to increase overseas sales. He has requested Juliette Maise to investigate the possibility of opening an overseas sales office. Experts predict that demand for all forms of salmon will grow at a record pace in overseas markets, particularly in developing countries. It is expected that Canada's international reputation for salmon and other fish will remain high. A market analysis by a respected source, published in May 2009, indicates that the market for fresh salmon is maturing very rapidly in Canada and the U.S., as consumer tastes become more sophisticated and demand begins to shift to shellfish and various exotic, imported fish. New packaging methods have been developed for mussels, which has enabled live fresh mussels to be exported to markets at greater distances from the farms. The wholesale market price for Canadian mussels has remained stable at about $1.40 per kg, but is expected to increase to $1. 0 over the next few years. In the past few years, global supplies of American oysters have decreased after hurricanes destroyed a significant percentage of the oyster farms in the southern U.S. At the same time, the popularity of these oysters among consumers in North America and Europe has been increasing. As a result, the market price for American oysters farmed in Canada significantly increased from $1.80 per kg in 2006 to $2.70 per kg in 2009. The re-established farms in the southern U.S.
are expected to have their first new harvests in another year or two. New Strategic Goals The board of directors met in May 2009 after receiving the financial statements for 2009. Guy Mills provided the board with a summary of selected site and segment data (see Appendix 1), and reported that the decreased profits in 2009 were caused by the Site 3 problems, the decrease in market value, the strengthening Canadian dollar versus the U.S. dollar, and the increased feed costs. He also indicated that he expected the four sites to yield an average of 3.8 million kilograms (950,000 kilograms per site) of harvested fish per year, assuming that no further unusual losses were incurred. Given the current market conditions and the risk of having to decrease prices or lose export sales to the U.S., the board decided that the company should move into other markets and diversify into shellfish farming. No dividends would be paid for the next year or two to free up some cash to invest in new projects. The board directed Mills to investigate establishing shellfish aquaculture sites and develop a business strategy for increasing the profitability of the current salmon operations. They indicated that any proposed investment should generate a minimum after-tax return of 10% within five years. Shellfish Aquaculture Opportunity Mills explored opportunities for diversifying into shellfish aquaculture. He found two potential opportunities (a mussel farm and an oyster farm) and wondered which one should be pursued or whether both should be pursued. A summary of the costs and yields for establishing these farms is provided in Appendix 2. Project Blue Wave Over the past two years, Dr. Lily Stern has been investigating what makes some salmon in an aquaculture environment grow more quickly than others, have better disease resistance, and develop higher-quality flesh.
Her studies have led her to submit a proposal for Project Blue Wave (see Appendix 3), which would use leading-edge genetic engineering to develop a strain of Atlantic salmon with superior qualities specifically suited to aquaculture. Dr. Stern insists that this is a new approach to finfish aquaculture and feels that it would revolutionize the industry. Executive Meeting – June 15, 2008 Mills suggested that AFC could increase revenues by pursuing overseas markets more aggressively. He also indicated that the company should find ways to decrease operating costs. He presented the two options for expanding into shellfish aquaculture and Dr. Stern’s Project Blue Wave proposal for discussion and asked for any new ideas to achieve the board’s goals. Mills also reported that residents in the vicinity of Site 3 were investigating the possibility of launching a lawsuit against AFC if they could gather enough evidence to prove that the escaped fish were causing environmental damage and contaminating the wild fish. In the past, similar lawsuits have had a 10% success rate with damages amounting to $10 million. Vanic questioned the wisdom of establishing a mussel farm in PEI. He indicated that many such farms become infested with an invasive parasite that attaches itself to the growing mussels. The parasites do not have a significant impact on the growth period or meat yields of mussels; however, maintenance, harvest and distribution costs are significantly increased (20% more variable production, 10% more fixed production, and 14% more variable distribution costs). Employees also dislike handling mussels infested with the parasite. Egin indicated that only about 25% of mussel farms get infected with the parasite. He was more interested in the Project Blue Wave proposal and suggested that it had a very good chance of realizing greater than market returns. 
He indicated that the R&D department had been conducting some preliminary research on genetic engineering and the scientists believe they are on the brink of delivering results, if supported with a little more investment. Jacques Dubois wondered whether the chances of successfully developing a faster-growing salmon were much lower than Egin or Dr. Stern realized, and whether a much larger aquaculture organization, or the government, would already be doing this research if it were a project worth pursuing. He felt that too much money had already been spent on R&D and not enough on operational efficiencies, supply chain management, and technologies. Dubois also wondered whether AFC should consider adopting IFRSs for financial reporting and, if so, what the major implications of the conversion would be. After the meeting, Mills directed Adam Rice, Controller, to review the company's strategic options and operational issues. Other Information Rice began by interviewing various staff members, and made the following notes: 1. The variability of the fishing industry has made banks very cautious. Consequently, the Eastern Bank of Canada would be willing to provide a loan of no more than $3 million at an annual interest rate of 8%, on the condition that AFC maintain a gross profit margin of at least 20%. 2. Maise has determined that Paris, France would be an ideal location for an overseas sales office. Space could be leased for CDN$5,000 per month and a local salesperson could be hired for an annual base salary of CDN$20,000 plus a four percent sales commission. Maise estimates that this office could generate annual sales of up to 500,000 kilograms of fresh whole salmon. She also indicated that there is a strong market for oysters in France, if they could be transported in an economical manner. 3. Rob Vanic predicts that world fuel prices will continue to increase and that the risk of spoilage of fresh seafood shipped overseas will double.
In fiscal 2009, two percent of overseas shipments of salmon were lost or spoiled before reaching the customers. 4. An investigation of the variable cost variances at Sites 3 and 4 revealed that the employees were overfeeding the fish, resulting in an excess amount of feed falling to the ocean floor. At Sites 1 and 2, the employees are well trained and experienced. 5. In June 2009, an important, high-potential overseas customer asked an AFC salesperson to ship crates of fish purchased for US$6,000 with documentation that stated the value as US$2,000. Apparently, this request was for customs purposes. The salesperson consulted Maise, who indicated that the company’s policy to please the customer applied in this and all other cases. The salesperson brought the matter to Rice’s attention. 6. Costs of preliminary research on genetic engineering have been expensed in the year incurred. 7. Genetic engineering is a common practice in the agriculture and livestock industries. Proponents of organic and natural foods have increasingly complained about the ethical issues surrounding genetic tampering. 8. A discount rate of 10% after taxes is used for evaluating capital investments.
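The board's hurdle (note 8 above, and the "minimum after-tax return of 10% within five years") amounts to requiring a positive net present value at a 10% discount rate. A minimal sketch, using hypothetical cash flows rather than AFC's actual projections:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial (year-0) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year-0 investment of $1,000,000 followed by five annual after-tax inflows.
# These figures are illustrative only, not taken from the case appendices.
flows = [-1_000_000, 250_000, 280_000, 300_000, 320_000, 340_000]
value = npv(0.10, flows)
print(value > 0)  # True: this hypothetical project clears the 10% hurdle
```

A proposal such as the mussel or oyster farm would be accepted under this rule only if its five-year cash flows, discounted at 10% after tax, produce a positive NPV.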

Wednesday, August 21, 2019

Literature Survey On Steganography

Literature Survey On Steganography In this chapter, the literature survey on steganography and various network security mechanisms is described. Many existing algorithms for steganography from 1991 to 2009 are studied and presented in this literature survey. A number of web sites as well as research papers were consulted on virtualization, ARP spoofing and IDS architectures. Descriptions of the research papers pertaining to steganography and network security are given in the subsequent sections. The literature is presented in chronological order for these two areas separately. 2.2 Literature survey on steganography Bender et al. [6] In this paper, the authors describe techniques of data hiding, such as low bit rate data hiding, in detail. Johnson, N. and Jajodia, S. [34] This article explores the different methods of steganography, such as LSB, masking and filtering, and also describes different software tools on the market for steganography, such as Stego Dos, White Noise Storm and S-Tools. Marvel et al. [38] Spread Spectrum Image Steganography (SSIS) is proposed as a blind scheme: the original image is not needed to extract the hidden information, only the secret key, and unless the receiver possesses the key the message is virtually undetectable, making the technique reliable and secure. Jessica Fridrich et al. [32] This paper proposes a highly accurate steganalysis technique which can even estimate the length of a secret message embedded by the LSB method. In this method, the test image is divided into groups of n consecutive or disjoint pixels. The method exploits the modified pixel values to determine the content of the secret message. A discriminating function is applied to each group of pixels; this function measures the regularity or smoothness of the pixels. Then a permutation function called flipping is applied to the pixel groups.
By using the discriminating function and flipping, pixel groups are classified into three categories: regular groups, singular groups and unused groups. For a given mask, the fraction of regular groups Rm and the fraction of singular groups Sm are calculated. The presence of noise in the image causes Rm to be greater than Sm. R. Chandramouli and N. Memon [49] This paper gives an analysis of various LSB techniques for image steganography. Tseng, Y.C. et al. [63] This paper presents a secure steganographic scheme which ensures that any modified bit in the cover image is adjacent to another bit that has the same value as the modified bit's new value. In this way detection becomes extremely difficult, but to achieve it the data hiding space has to be reduced. Da-Chun Wu and Wen-Hsiang Tsai [23] proposed a differencing steganographic method that uses the difference between two consecutive pixels in the same block to determine the number of secret bits to be embedded. The method uses a range table spanning 0-255. The difference value is adjusted within the same range to embed the secret bits, and the difference between the original difference value and the new one is shared between the two pixels. The extraction scheme in this method is quite simple and does not require the cover image. Sorina Dumitrescu et al. [55] This paper proposes a new steganalysis technique to detect LSB steganography in digital signals such as images and audio. The technique is based on statistical analysis of sample pairs. With this technique, the length of a hidden message embedded via LSB steganography can be estimated with high precision. C.-C. Chang and H.-W. Tseng [9] This paper proposes a novel steganographic technique which modifies the pixel values. The method does not replace the LSBs of a pixel value directly, but changes the pixel value into another similar value.
In a word, this steganographic method provides a large embedding capacity with little perceptual distortion. Mei-Yi Wu et al. [40] This paper presents a new iterative method of palette-based image steganography which reduces the root mean square error between an original image and its corresponding stego-image. Based on a palette modification scheme, one message bit can be embedded into each pixel of a palette-based image iteratively. The cost of removing an entry color from the palette and the profit of generating a new color to replace it are calculated; if the maximal profit exceeds the minimal cost, an entry color is replaced in that iteration. C.-K. Chan and L.M. Cheng [11] This paper proposes an LSB technique in which the secret data is embedded in the least significant bits of the image pixels. Huaiqing Wang and Shuozhong Wang [29] Different steganographic techniques and steganalytic methods are discussed in detail in this paper. It covers LSB modification techniques, masking techniques, transform domain techniques, techniques incorporated in compression algorithms, and spread spectrum techniques. The important attributes of a steganographic system are then presented: security, payload and robustness. The paper also presents various steganalytic methods such as RS steganalysis, the Chi-square test, histogram analysis and universal blind detection. Xinpeng Zhang and Shuozhong Wang [65] This paper proposes a steganalysis of the PVD method of Wu and Tsai, based on histogram analysis. A zigzag scan of the image pixels produces a vector called the image vector, and the difference of every pair of pixels in this vector produces another vector called the substitute vector. An image built from the substitute vector, named the substitute image, is constructed, and its histogram is analyzed. Andrew D. Ker [4] Detecting LSB matching steganography is quite difficult compared to LSB replacement steganography.
In this paper the histogram characteristic function (HCF) is used for the detection of steganography in color images, but it cannot be used for grayscale images. Alvaro Martín et al. [3] The authors experimentally investigate three different steganographic algorithms: Jsteg, MHPDM, and one of the algorithms used in S-Tools. Jsteg embeds a message in the least significant bits of JPEG DCT coefficients. The MHPDM (Modified Histogram Preserving Data Mapping) algorithm, developed from HPDM (Histogram Preserving Data Mapping), works by altering the least significant bits of a subset of the JPEG DCT coefficients of an image. Chin-Chen Chang et al. [15] This paper proposes two efficient steganographic methods for gray-level images that utilize the run-length concept. The two methods embed bits of the secret data in each two-pixel block. In addition, a modular operation is applied in both methods to control image quality. The experimental results demonstrate that both methods perform better than all previous methods in terms of image quality and embedding capacity. Chin-Chen Chang and Tzu-Chuen Lu [13] The method proposed in this paper exploits the difference expansion of pixels to conceal a large amount of message data in a digital image. The payload capacity of the proposed scheme is higher than Tian's scheme and Fridrich's scheme, and the quality of the embedded image is even higher than those of the other schemes. Chin-Chen Chang and Tzu-Chuen Lu [14] SMVQ (Side Match Vector Quantization) exploits the correlations between neighbouring blocks to predict the index of an input block, which improves not only the block effect of VQ but also its compression performance. Owing to its good compression performance and image quality, SMVQ has received growing attention. Suk-Ling Li et al. [56] In this scheme, the best-match cover-image block for each secret-image block is first selected based on the block difference.
Then the error matrix, the normalized error matrix, the difference degree and the quantized error matrix between the cover-image block and the secret-image block are computed. The block header information is embedded into the cover image by the simple LSB substitution method. Chin-Chen Chang et al. [17] This new scheme classifies the host image pixels into two groups according to the pixel values. For each group of pixels, the corresponding secret pixel values go through an optimal substitution process and are transformed into other pixel values by following a dynamic programming strategy. The transformed pixel values are then embedded in the host pixels using modulus functions to obtain the stego-image. Hideki Noda et al. [27] JPEG compression using the discrete cosine transform (DCT) is still the most common compression standard for still images. QIM (Quantization Index Modulation) is applied in the DCT domain; DCT-based steganographic techniques are immune to histogram-based attacks. Two different quantizers are used with QIM, one for embedding 0 and another for embedding 1. Another method, HM-JPEG (Histogram Matching JPEG) steganography, is also presented along with QIM-JPEG steganography. In these two methods the embedding of the secret message takes place only during quantization of the DCT coefficients, not by modifying already-quantized DCT coefficients. Chin-Chen Chang et al. [12] This paper presents a reversible data hiding scheme for compressed digital images based on side match vector quantization (SMVQ). In VQ- or SMVQ-based methods, the compression codes are damaged by the embedded secret data and cannot be completely reconstructed after extraction. With this method, the original SMVQ compression codes can be completely reconstructed after extracting the embedded secret data.
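Several of the surveyed schemes build on plain LSB substitution as a baseline. A minimal sketch of embedding and extraction over a list of 8-bit pixel values:

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with one message bit."""
    if len(bits) > len(pixels):
        raise ValueError("message longer than cover")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read the message back from the first n_bits pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [154, 200, 97, 46, 83, 120, 31, 255]
message = [1, 0, 1, 1, 0, 1, 0, 0]
stego = embed_lsb(cover, message)
print(extract_lsb(stego, len(message)) == message)   # True
print(max(abs(c - s) for c, s in zip(cover, stego)))  # 1: at most 1 gray level
```

Because each pixel changes by at most one gray level, the distortion is visually negligible, which is why the survey repeatedly uses this method as the reference point for capacity and quality comparisons.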
Ran-Zan Wang and Yeh-Shun Chen [51] This paper presents a new steganography method for images which uses a two-way block-matching procedure to find the maximum-similarity block for each block of the image. The indices, along with some non-matched blocks, are recorded in the least significant bits of the carrier image using a hop scheme. This algorithm provides a high data payload capacity. C.-C. Chang and W.-C. Wu [8] This paper provides a technique to improve the embedding capacity without reducing the quality of the cover file: an adaptive VQ-based data hiding scheme built on a codeword clustering technique. The adaptive embedding method is superior to the fixed embedding method in terms of embedding capacity and stego-image quality. Xinpeng Zhang and Shuozhong Wang [64] A novel method of steganographic embedding in digital images is illustrated in this paper. Each secret digit in a (2n+1)-ary notational system is carried by n cover pixels, where n is a system parameter. This method offers higher embedding efficiency than previous techniques. Mehdi Kharrazi et al. [39] This paper gives an experimental evaluation of various steganographic and steganalytic techniques. Chin-Chen Chang et al. [18] In this paper, a new watermarking-based image authentication scheme is implemented. The feature extraction process is block-based, and the feature of a block is obtained by applying a cryptographic hash function. The bit stream of the feature is then folded and embedded into some least significant bits of the central pixel of the corresponding block. Po-Yueh Chen and Hung-Ju Lin [48] This paper proposes a new image steganographic method based on frequency domain embedding. The transform applied is the Haar DWT, which yields three regions: a low frequency region, a middle frequency region and a high frequency region. Embedding occurs in the middle frequencies.
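The pixel-value differencing (PVD) idea of Wu and Tsai [23], surveyed above, can be sketched for a single pixel pair. The six-range table below is the commonly cited layout; the falling-off-boundary adjustment is omitted for brevity, so this is a simplified sketch rather than the authors' full scheme:

```python
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def locate(d):
    """Find the range of the table that contains difference d."""
    for lo, hi in RANGES:
        if lo <= d <= hi:
            return lo, hi
    raise ValueError("difference out of range")

def embed_pair(p1, p2, bits):
    """Embed as many leading bits as the pair's range allows; return new pair."""
    d = abs(p2 - p1)
    lo, hi = locate(d)
    t = (hi - lo + 1).bit_length() - 1        # capacity: log2(range width)
    value = int("".join(map(str, bits[:t])), 2)
    m = (lo + value) - d                      # shift needed within the range
    if p2 >= p1:                              # spread the shift over both pixels
        return p1 - m // 2, p2 + (m - m // 2), t
    return p1 + (m - m // 2), p2 - m // 2, t

def extract_pair(p1, p2):
    """Recover the embedded bits from the pair's new difference."""
    d = abs(p2 - p1)
    lo, hi = locate(d)
    t = (hi - lo + 1).bit_length() - 1
    return [int(b) for b in format(d - lo, f"0{t}b")]

s1, s2, used = embed_pair(100, 120, [1, 0, 1, 1, 0])
print(used)                  # 4: a difference of 20 falls in range (16, 31)
print(extract_pair(s1, s2))  # [1, 0, 1, 1]
```

A large difference (a busy edge region) lands in a wide range and carries more bits, while a small difference (a smooth region) carries fewer, which is exactly the capacity behaviour the survey describes, and why extraction needs only the stego pair, not the cover image.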
Tse-Hua Lan and Ahmed H. Tewfik [61] The authors propose an algorithm based on the quantized projection embedding method. Quantized Projection (QP) combines elements of quantization (QIM) and spread-spectrum methods: it quantizes a diversity projection of the host signal, inspired by the statistic used for detection in spread-spectrum algorithms. Yuan-Hui Yu et al. [67] In this method, a color or grayscale secret image is hidden in a true-color host image. The procedures for the different secret image types are independent. There are three image-hiding types, which depend on the type of secret image; the second type is a palette-based 256-color secret image, and the third type is a grayscale secret image. Ran-Zan Wang and Yao-De Tsai [52] This paper presents an efficient image-hiding method that provides a data hiding capacity high enough to allow the embedded image to be larger than the cover image. The image to be hidden is divided into a series of non-overlapping blocks, and a block matching procedure is applied to each block to search for the best matching block from a pool of candidate blocks. The selection of the best matching block is done by the K-means clustering method. The indices of the secret image are then hidden in the LSBs of the best matching block in the cover image. Bibhas Chandra Dhara and Bhabatosh Chand [7] Block truncation coding and vector quantization are the two widely used spatial domain compression techniques. In the proposed method the inter-plane redundancy is reduced by converting RGB to a less correlated triplet, the spatial redundancy is reduced by block quantization using the BTC-PF method, and the code redundancy by entropy coding using a Huffman code. Nan-I Wu and Min-Shiang Hwang [41] This paper presents a survey of current methods of steganography in grayscale images. The following methods are compared and analyzed in this paper. 1.
The simple LSB method: Secret data is hidden in the least significant bits of the cover image. The quality of a 3-bit LSB stego image is merely acceptable. 2. The optimal LSB methods: To improve the quality of the stego image, an optimal procedure is adopted in LSB embedding: when data is hidden, the nearest value is chosen for the cover pixel so that cover-image distortion is minimized. 3. The PVD (Pixel Value Differencing) method: The image is divided into non-overlapping blocks of two pixels in a zigzag manner. The amount of secret data to be embedded is determined by the difference between the values of the two adjacent pixels: more data can be hidden when the difference is high, and less when it is low. In this method the cover image is not required for extraction of the secret message. 4. The MBNS (Multiple-Based Notation System) method: This method is based on human vision sensitivity (HVS). The amount of secret data that can be hidden in a pixel is determined by a parameter called local variation, which depends on HVS and is determined by three surrounding pixel values. The greater the local variation, the more data can be hidden in that pixel. When these methods are compared for low-capacity hiding, the PVD and MBNS approaches produce better stego images than LSB-based methods. Zhe-Ming Lu et al. [68] This paper proposes an image retrieval scheme based on BTC-based histograms. BTC (Block Truncation Coding) is a simple, easy-to-implement image compression technique. To reduce the bit rate of each part of the BTC-coded triple data, vector quantization is applied. Chin-Chen Chang et al. [19] This paper proposes a reversible data-hiding scheme for embedding secret data in VQ-compressed codes based on the de-clustering strategy and the similarity of adjacent areas in a natural image.
Chang et al.'s scheme has more flexibility and a higher embedding capacity than other schemes. H. Motameni et al. [25] propose a novel technique for hiding a text message in a grayscale image: different colors in the cover image are labeled in order to identify its dark regions, and embedding data in these darker regions yields high-quality stego images. This method offers more security than other LSB techniques. Zhensong Liao et al. [69] summarize present techniques for estimating data-hiding capacity, discussing various communication channel models and host data models. H. Arafat Ali [24] proposes a spatial-domain steganographic scheme for JPEG images based on statistical analysis, called the IVSP (Improving Visual Statistical Properties) method. It enhances the statistical properties of the stego image, reduces the quantization error that creeps in with the JPEG format, and is more secure than techniques presently in use. Youngran et al. [66] propose a new method able to produce high-quality stego images: the number of bits embedded varies with the pixels' characteristics, and the integrity of the original data is preserved. Andrew D. Ker [5] treats the batch steganography problem of spreading a payload over multiple covers, proving that the secure steganographic capacity is proportional to the square root of the total cover size. Hong-Juan Zhang and Hong-Jun Tang [28] propose a novel method of image steganography that can withstand statistical analysis tests such as the RS and Chi-square steganalysis techniques. Kazuya Sasazaki et al. [35] propose a scheme that losslessly stuffs data bits into a carrier image using two differences.
In Sasazaki's scheme, a three-pixel block in an image contains two absolute differences: the difference between pixels one and two, and the difference between pixels two and three. Such a difference is called a block difference. Chung-Ming Wang et al. [21] improve on Wu and Tsai's pixel-value differencing scheme (2003). The image is divided into blocks of two consecutive pixels, and the number of bits that can be embedded is determined by the width of the corresponding entry in a range table. The remainder of the sum of the two pixel values with respect to the width of the suitable range is calculated, and the pixel values are adjusted so that this modulus equals the decimal value of the binary string to be embedded in the block. The method also addresses the falling-off-boundary problem and produces higher quality stego images than other spatial-domain steganography techniques, though its hiding capacity is lower than that of other methods. Chien-Ping Chang et al. [20] propose a novel data-hiding scheme that embeds a message into a cover image using tri-way pixel-value differencing. Blocks of four pixels are considered at a time and divided into three pairs; the PVD method is applied separately to each pair, one of the modified pairs is chosen as a reference, and the other two are adjusted accordingly. This enormously increases the hiding capacity over plain pixel-value differencing, but the quality of the stego image, expressed as a PSNR value, decreases. Adem Orsdemir et al. [1] base their method on higher-order statistics (HOS) steganalysis. Steganographers generally focus on undetectability and payload rather than on the statistical difference between the stego image and the cover image.
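The core PVD idea, that capacity grows with the inter-pixel difference, can be sketched with a small range table. The specific table below uses widths that are powers of two, a common convention in PVD papers, but it is an illustrative assumption rather than the exact table of the surveyed schemes.

```python
# Assumed quantization ranges with widths 8, 8, 16, 32, 64, 128.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_capacity(p1, p2):
    """Bits a two-pixel block can hold under pixel-value differencing:
    log2 of the width of the range its absolute difference falls into."""
    d = abs(p1 - p2)
    for lo, hi in RANGES:
        if lo <= d <= hi:
            width = hi - lo + 1
            return width.bit_length() - 1   # log2(width); widths are powers of 2
    raise ValueError("difference out of range for 8-bit pixels")

# A smooth pair hides few bits; an edge pair hides many.
assert pvd_capacity(100, 103) == 3   # d = 3  -> range (0, 7),    3 bits
assert pvd_capacity(10, 200) == 7    # d = 190 -> range (128, 255), 7 bits
```

The modulus variant of Wang et al. then adjusts the two pixel values so that the remainder of their sum modulo the range width encodes the secret bits, rather than rewriting the difference directly.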
Orsdemir et al. argue that when the steganographer is well aware of the HOS steganalyzer and formulates statistical indistinguishability, visual quality, and detectability requirements, the resulting steganographic method can withstand steganalysis based on statistical differences. Chin-Chen Chang et al. [16] propose compressing digital images using block truncation coding (BTC), an efficient spatial-domain method with simple computations and acceptable compression rates. Zhiyuan Zhang et al. [71] note that in two-description image coding the image is generally partitioned into two parts, and each description is produced by alternately concatenating a finely coded bit stream of the other part; multiple description coding is a reliable method for robust transmission over unreliable networks. H. B. Kekre et al. [26] propose an improved version of the least significant bit (LSB) method. Before embedding, an 8-bit secret key is XORed with every byte of the message; the message is recovered by the same XOR operation with the same key. The number of LSBs used for data embedding is calculated from the MSBs. The method is simple to implement and offers a higher payload than methods such as PVD. Sathiamoorthy Manoharam [54] analyzes the steganalysis of the LSB technique using RS steganalysis, taking two classes of images, natural photographic images and synthetic images, as the cover medium. Ahmad T. Al-Taani and Abdullah M. AL-Issa [2] propose a method with good stego-image quality and high embedding capacity: the carrier image is divided into blocks of equal size, and the data bits are stuffed into the edge of each block depending on the number of ones in the left four bits of the pixel. Experimental results are compared with the pixel-value differencing and gray-level modification methods. P. Moulin and M.
Mihcak [45] describe the data-hiding capacities of various image sources. 2.3 Literature survey on Network Security John McHugh et al. [33] describe the role of an IDS in an enterprise, survey the most widely used intrusion detection techniques, and describe representative systems from the commercial, public, and research areas. Ray Spencer et al. [53] propose Flask, a microkernel-based operating system security architecture that solves access-rights problems and suits many operating environments. Clive Grace [22] gives a detailed account of the various types of attack possible and of the various types of intrusion detection systems and software. Nong Ye et al. [42] investigate a multivariate quality control technique that builds a long-term profile of normal activities in order to detect intrusions. Tal Garfinkel and Mendel Rosenblum [59] propose an intrusion detection architecture, with results demonstrating detection of attacks using an IDS completely isolated from the monitored host. Tal Garfinkel et al. [58] provide a tamper-resistant trusted hardware platform on which every application runs on either an open (general-purpose) platform or a closed platform, a general-purpose platform with security and integrity properties. P. England et al. [43] describe a trusted platform that enforces strict control over the software and hardware platforms to withstand various vulnerabilities. Suresh N. Chari and Pau-Chen Cheng [57] design BlueBox, a host-based IDS built on system call introspection.
They designed a set of fine-grained access control rules for system resources. M. Rosenblum and T. Garfinkel [37] describe the virtual machine monitor, how a VMM can be used to provide security, and various implementation issues and future directions for the VMM. James E. Smith and Ravi Nair [30] describe the various levels of abstraction in virtualization and the architecture of virtual machines, covering both process and system virtual machines. Peyman Kabiri and Ali A. Ghorbani [47] review current trends and technologies implemented by researchers and elucidate the application of honeypots to detecting attacks. Petar Cisar and Sanja Maravic Cisar [46] describe a flow-based algorithm combined with data-mining techniques for intrusion detection. Jenni Susan Reuben [31] gives a literature survey of the security issues and threats common to all virtualization technologies. Zhenwei Yu et al. [60] present experimental results for an automatically tuning intrusion detection system that controls the number of alarms output to the system operator and, through the operator's feedback, tunes the detection model when false alarms are identified. The Flask architecture of Security-Enhanced Linux for Red Hat is described in detail on this website [81]. 2.4 CONCLUSION This chapter described the various existing methods and algorithms for steganography and network security. Based on the existing algorithms, efficient methods are proposed for the following: 1. Data Security 2. Network Security 2.4.1 Data Security Many cryptography and steganography methods already exist for securing data transmitted on a channel, but every algorithm has its own disadvantages.
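One simple layer of data security surveyed earlier, XOR-ing every message byte with an 8-bit secret key before embedding (as in Kekre et al. [26]), is easy to sketch. The key value below is an illustrative assumption, not a value from the surveyed scheme.

```python
def xor_with_key(data: bytes, key: int) -> bytes:
    """XOR every message byte with an 8-bit key before embedding.

    The same call recovers the message, since XOR is its own inverse."""
    return bytes(b ^ key for b in data)

msg = b"secret"
key = 0xA5                      # illustrative 8-bit key
obscured = xor_with_key(msg, key)
assert obscured != msg                          # bytes are masked
assert xor_with_key(obscured, key) == msg       # round trip restores the message
```

Because XOR is self-inverse, the receiver needs no separate decryption routine, only the shared key.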
In the case of steganography, the basic algorithm is the LSB algorithm together with some variations of the spatial-domain techniques. But the algorithm is always public: once it is known, an attacker will try to recover the secure data. In this thesis two algorithms are proposed to provide data security, which have not been presented before: Highly secured, high-payload and randomized image steganographic algorithm using a robust key: the algorithm used for the steganography process is either PVDM or LSB, chosen according to the inter-pixel difference value, in order to increase the data-stuffing capacity without disturbing the quality of the stego image. The positions of the pixels in which bits are stuffed are decided by a stego key randomly selected by the user; this key is transmitted to the other party in encrypted form, so the key is robust. Highly secured, high-quality, high-payload and randomized image steganographic algorithm using a robust key based on the tri-way PVDM method: the algorithm used for the steganography process is tri-way PVD with modulus, an extension of tri-way PVD [20], in order to increase the stego image quality. The positions of the pixels in which bits are stuffed are again decided by a stego key randomly selected by the user and transmitted to the other party in encrypted form, so the key is robust. 2.4.2 Network Security For network security, many software and hardware devices are available, such as firewalls and IDSs.
Generally, once an intrusion is detected by the IDS it can be patched with the available techniques, but the affected applications must be stopped temporarily in the meantime. The proposed trusted architecture for network security instead provides a self-healing intrusion detection system that does not disturb the actual state of the system, and trust can be restored to the system using virtualization concepts.
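The per-block choice between LSB and PVDM in the first proposed data-security algorithm could look roughly like the following sketch. The threshold value and function names are assumptions for illustration, not the thesis's actual parameters.

```python
def choose_method(p1, p2, threshold=15):
    """Pick an embedding method per two-pixel block, following the hybrid
    idea: smooth areas take plain LSB, edge areas take PVD-with-modulus
    embedding. The threshold of 15 is an illustrative assumption."""
    return "LSB" if abs(p1 - p2) <= threshold else "PVDM"

# Smooth pairs route to LSB; high-contrast pairs route to PVDM.
blocks = [(100, 102), (100, 180), (50, 52), (0, 255)]
methods = [choose_method(*b) for b in blocks]
assert methods == ["LSB", "PVDM", "LSB", "PVDM"]
```

Splitting the image this way lets edge regions carry the larger PVD payloads while smooth regions, where PVD would hold few bits anyway, still contribute LSB capacity.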

Tuesday, August 20, 2019

Early History Of Public Health Health And Social Care Essay

Contemporary public health has evolved through various historical stages. Its development as a discipline has been shaped over many years, from ancient times to the present day, and pioneers from different countries contributed tremendously to its historical evolution. Furthermore, public health has been marked by several changes since its inception, driven by newly developed ideas and scientific evidence aimed at improving the health of the population (Porter, 1994). The first part of this essay will discuss in more detail the most important changes that public health has undergone in the course of its evolution and why these changes occurred. The second part will discuss the explicit meaning of the essential components of public health and propose how these should be achieved. MOST IMPORTANT CHANGES IN THE HISTORY OF PUBLIC HEALTH AND REASONS FOR THESE CHANGES Throughout the history of the human race, health problems have existed and have mainly concerned community well-being. Most of these problems were caused by communicable diseases related to a poor physical environment, an insufficient supply of good-quality water and food, and poor provision of medical care. Interventions to cope with these health issues have changed over time but remained closely linked, leading to what is known today as modern public health (Rosen, 1993, p.1). 1.1. Early history of public health The available literature demonstrates evidence of activities associated with the improvement of community health from ancient times. Rosen (1993:1) outlines that archaeological findings in northern India from some 4,000 years ago have revealed a developed urban planning system with good sanitation and housing.
He further adds that evidence from other Asian countries shows that similar systems were developed elsewhere, most notably in Egypt. Beyond these earliest developments, public health continued its evolution over the centuries, pioneered by several figures, among them Hippocrates. This honored Greek physician, also known as the father of medicine for his commendable contribution to the practice of medical ethics, demonstrated how a proper diet, fresh air, a moderate climate and attention to lifestyle and living conditions were important for healthy living (Schneider & Lilienfeld, 2008:5). Later, other societies inspired by Greek civilisation, as was the case for the Romans, continued to develop water and sanitation infrastructure and healthcare systems. Schneider and Lilienfeld (2008:5) report that, in addition to the public health systems just introduced, the Romans put in place governmental administration systems to oversee the initiated changes. However, these early public health initiatives did not benefit the whole population; vulnerable groups such as slaves and those living in poverty had no access to safe drinking water and adequate sanitation and continued to suffer high rates of disease, as is still noted in some parts of the world today (Schneider & Lilienfeld, 2008:5). 1.2. Middle Ages After these early developments came the Middle Ages (500-1500 A.D.), characterised by a decline of Greco-Roman power due to disintegration from within and invasions from outside, which destroyed the public health infrastructure (Rosen, 1993:26). During this period, health problems were thought to have spiritual causes, and spiritual remedies as well. This belief was shared by both pagans and Christians. Christians believed there was a link between sin and the occurrence of disease, the latter being considered a punishment (Rosen, 1993:26).
The role of the biological and physical environment as the main factor in transmissible disease causation was ignored; this was the main implication of the spiritualism of this era, and as a result it was difficult to control the epidemics that erupted, leaving millions of people dead and others suffering from their sequelae (International Health Sciences University, 2012). Rosen (1993:35) states that the two most devastating epidemics of this period were the Plague of Justinian and the Black Death, in 543 and 1348 respectively. Moreover, other outbreaks between these two dates ravaged Europe and the regions around the Mediterranean Sea, notably but not exhaustively leprosy, smallpox, diphtheria, measles, tuberculosis, and scabies. The causes of these epidemics had not yet been identified, but poor living conditions were thought to be strongly associated. After these horrific epidemics, various measures were put in place in European cities to fight them and consequently improve public health. The establishment of butcheries and the regulation of livestock ownership, of food at public markets, of food preservation and of garbage disposal proved effective in preventing disease transmission from animals to people and between people. Additionally, food preservation regulation played a key role in preventing food-borne diseases from damaged and expired food (International Health Sciences University, 2011). 1.3. Renaissance Era The development of public health did not stop in the Middle Ages. The following period, the Renaissance (1500-1700 of the Christian era), was marked by a rejection of older theories, although the old theories helped in developing new ones. The spiritual theory of the cause of disease began to be doubted as epidemics killed both sinners and saints, and environmental factors were uncovered as the leading cause of the development of infectious diseases.
Further critical observation of sick people and the signs and symptoms they presented showed that various illnesses were distinctly separate (International Health Sciences University, 2012). It is worth noting that, during the Renaissance era, various figures brought new discoveries to the development of public health. Rosen (1993) reports that the Italian Girolamo Fracastoro introduced the theory of contagion, showing the role of micro-organisms in the development of infectious diseases and the way communicable diseases are transmitted. The Dutchman Antonie van Leeuwenhoek, the inventor of the microscope, was the first to confirm that Fracastoro's theory was probably true, after observing microbial agents. Indeed, the contribution of other authors (Petty, John Graunt and Gottfried Achenwall) in this important era of public health evolution was significant: they introduced the concept of measurement in public health to quantify health problems, with calculations of mortality, life expectancy and fertility (Rosen, 1993). Despite this new era of rethinking and developing new ideas about public health, diseases such as malaria, smallpox and plague continued to ravage and kill many people in some European countries. Travel and movement between urban and rural areas also dominated this era, explaining the spread of these illnesses to other areas and the suffering of their inhabitants. 1.4. The Enlightenment epoch This is the period from 1750 to the mid-nineteenth century (Encyclopedia of Public Health, 2002). The Enlightenment era is considered the era in which the discipline of public health made tremendous progress; Rosen (1993) describes it as pivotal in the development of public health. Industrial development was the main turning point during this era.
Likewise, social and political development had a great impact on societal transformation, and knowledge of the way communicable diseases spread improved markedly (Encyclopedia of Public Health, 2002). Despite these remarkable changes, health conditions remained poor owing to the great number of people moving to industrial areas in the cities, poor sanitation systems and an insufficient supply of clean water. Additionally, working conditions were not conducive for those working mainly in mines and factories. All of these factors contributed largely to the spread of disease (Rosen, 1993). In England, Edwin Chadwick demonstrated the reality of the poverty-disease cycle and attempted to measure the association between poverty and disease; he also linked disease with environmental factors. His report, The Report of a General Plan for the Promotion of Public and Personal Health (1850), attracted attention and is considered by many one of the important documents of modern public health (Encyclopedia of Public Health, 2002). Chadwick's findings were later corroborated by John Snow's work during the famous 1854 London cholera outbreak, in which he identified a contaminated water pump as the probable origin of the epidemic (International Health Sciences University, 2012). Towards the end of the 19th century, new discoveries in bacteriology emerged. The great work of the Frenchman Louis Pasteur, in collaboration with other scientists, showed that micro-organisms were responsible for the occurrence of disease, disproving the earlier theory of spontaneous generation; henceforth the germ theory was born. Later, the German Robert Koch proved that a specific micro-organism causes a specific disease (International Health Sciences University, 2012).
Following these remarkable findings, medicaments were developed, including disinfectants that became popular in medical practice, and as a result mortality and morbidity rates declined significantly. Additionally, the identification of microbes as the causative agents of disease led to the establishment of immunology as a science and subsequently to the development of vaccines (International Health Sciences University, 2012). 1.5. Twentieth Century Early in the century, the decrease in mortality and morbidity was significant, following the emergence of bacteriology in the late 19th century. On the other hand, serious health problems, infant mortality among others, did not disappear. It is reported that, at the time, health programs for improving maternal and child health were developed in Europe and in the United States of America (Encyclopedia of Public Health, 2002). Academic programs in public health were developed, given the growing scope and complexity of public health problems, to deal with research issues and to train public health personnel. Health organisations, agencies and charities were established to tackle the public health concerns of particular population groups (Rosen, 1993). As the twentieth century progressed, the role of public health continued to expand and its horizon broadened, although the 1920s and early 1930s saw slow development as disease prevalence declined following the establishment of sanitary measures. In the aftermath of World War II there was increasing growth of health infrastructure in the curative field, but little attention was paid to planning. The 1960s and early 1970s marked what was named the period of social engineering, characterised by economic growth, chiefly in the United States of America, while part of the population remained without medical coverage (International Health Sciences University, 2012).
From the late 1970s to the 1980s, health promotion initiatives, the eradication of certain diseases that had previously ravaged the world, and the emergence of new infectious diseases were making headlines. The Encyclopedia of Public Health (2002) states that the emergence of Human Immunodeficiency Virus infection, the use of addictive drugs and air pollution were the main preoccupations of the World Health Organisation and other international agencies. Conclusion As a final point, it is clear that public health as a discipline has its own history, which evolved from the early history of the human race to today. The focus of public health broadened over time as health problems evolved. The future of public health will remain of utmost importance in addressing population health, and everyone is invited to play an active role. MEANING OF ESSENTIAL PUBLIC HEALTH COMPONENTS AND THE WAY THEY SHOULD BE ACHIEVED 2.1. Collective responsibility for health and the major role of the state in protecting and promoting health The health sector is the main sector dealing with the health of populations, but its activities are not the only ones concerned with the promotion of community health. The World Health Organisation (2013) states that the health of populations is determined not only by the health sector but also by social and economic factors, and hence by policies and actions beyond those of the health sector. In developing health policies, governments should work collaboratively with the other sectors involved in the development process, such as finance, education, agriculture, environment, housing and transport, to see how their planning can reach its objectives while also improving health. This intersectoral partnership also helps in tackling other health-related issues, such as curbing activities that pollute the environment or promoting access to quality education and gender equality. 2.2.
Focus on the whole population Public health activities are intended to promote the health of the whole population rather than individuals' health. According to Riegelman (2010), the first thing that comes to mind in public health is the health of the community and society in general: activities to improve health are no longer individual-centered but population-centered. To achieve this, collaboration between all development sectors is needed, given the wide scope of public health. The involvement of all development actors is a comprehensive way of thinking about the scope of public health and an evidence-based approach to the analysis of health determinants and illnesses, leading to evidence-based interventions to protect and improve health (Riegelman, 2010). 2.3. Emphasis upon prevention Prevention is a key component of public health practice. It has been said that prevention is better than cure, a statement that shows how much prevention activities are of paramount importance in public health. Health promotion and disease prevention activities play a key role in tackling the health problems a community faces, which in many cases are preventable (World Health Organisation, 2002). Prevention strategies that aim to alleviate risk factors by promoting healthy behaviours and reducing dangerous exposures require collaboration between government and different stakeholders, and the active participation of the population (World Health Organisation, 2002). 2.4. Recognizing underlying socio-economic determinants of health and disease Socioeconomic determinants, together with the other determinants of health (biological, environmental, cultural, personal behaviour, living and working conditions), largely influence the health status of a population. Further, these health determinants may interact with other factors for better or worse. Importantly, socioeconomic factors are thought to be the major determinants of health.
The Washington State Department of Health (2007) reports that health impacts associated with a lower socioeconomic position accumulate and persist throughout the lifespan. Partnership between public health professionals, the community, nongovernmental organisations and governmental institutions is a major force in addressing this issue (Washington State Department of Health, 2007). 2.5. Partnership with the population served Collaboration with the community in addressing health issues is a core part of health promotion activities. The Declaration of Alma-Ata (1978) holds that maximum community involvement, individual self-reliance and active participation in the planning, organisation, operation and control of primary healthcare are the basis of success in health promotion activities. Therefore, governments should establish policies, strategies and plans of action to ensure that primary healthcare is launched and sustained as a core part of the health system, in partnership with other sectors. 2.6. Multidisciplinary basis The multidisciplinary character of public health is unquestionable. According to Tzenalis and Sotiriadou (2010:50), the engagement of various stakeholders in the task of improving population health shows that promoting health does not belong to one group of professionals or one sector of the health services. Joint action by various professional groups at every level is reported to be effective and is recommended in providing health promotion services (Solheim, Memory and Kimm 2007, cited in Tzenalis and Sotiriadou, 2010). Conclusion Altogether, the core components of modern public health described above demonstrate how wide the discipline of public health is. The active participation of all the stakeholders involved is the key to the success of public health practice.

Monday, August 19, 2019

Essay --

â€Å"In July 1945, the first atomic bomb was tested in New Mexico and the next month the second and third weapons off the production line were dropped on Japan. Since then no nuclear weapons have been used in anger, although tens of thousands have been accumulated by the major powers and their destructiveness and sophistication increased immensely.† The nature of warfare is constant and evolved from multiple factors and military revolutions over time. The purpose of this paper is to identify the most important military revolution in history and highlight its effects that permeate modern day society. The proliferation of nuclear weapons is the most significant military revolution that led to the greatest changes in warfare, which include the immergence of new threats such as non-state actors, the shift from total war to low intensity conflict, and the importance of technology and innovation. This military revolution completely shattered existing paradigms of warfare due to th e real threat of nuclear weapons’ total destruction of humanity. The arrival of nuclear weapons transformed the international playing field permanently and new threats such as non-state actors have immerged as a result. Initially, only superpowers with nuclear arsenals had a global role as was evident during the Cold War between the U.S. and Soviet Union, but nuclear proliferation triggered a race to possess this power in the last 60 years. The possible employment of nuclear weapons between the two superpowers during the Cold War was unprecedented. The power of this stalemate shattered the paradigm of warfare and demonstrated how significant this military revolution’s effects were even at the mere threat of nuclear weapons use. Regarding this standoff between t... ... examined the importance of the nuclear weapons military revolution and its lasting impacts on modern day society. 
The evidence presented supports why this military revolution had the greatest impact of all on warfare, illustrated through the emergence of new threats, the shift from total war and high-intensity conflict to low-intensity conflict and, finally, the critical role that technology and innovation have played since the advent of nuclear weapons. This matters in today's operational and strategic environment because American military and political leadership will continue to have to take into account the use of nuclear weapons on the battlefield. As globalization continues to set the conditions for worldwide nuclear weapons proliferation, the restraints and operational risks will dramatically increase and affect all strategic planning.

Sunday, August 18, 2019

expatriate failures :: essays research papers fc

EXPATRIATE FAILURES "The internationalization of business has proceeded at a rapid pace as the world has become a global economy." (Mathis, Jackson 2000) This is the very reason why companies now need international executives. As all aspects of a business spread worldwide, so must the employees. An expatriate, by definition, is a home-country national, usually an employee of the firm, who is sent abroad to manage a foreign subsidiary (Rodrigues, 2001). A successful expatriate generally requires an extensive amount of time and money; a failed expatriate, however, can be even more costly for an organization. A study of multinational corporations showed that 69% of the firms surveyed had expatriate recall rates of 10 to 20 percent. Compared with Japan's figures (86% of firms had a recall rate below 5%), the United States has room for improvement (Tung, 1981). There are many reasons for expatriates to fail, and many differences between Japanese and United States human resource management planning. One of the main reasons expatriates fail is the social and physical environment of the foreign country. Adaptation problems can affect the on-the-job effectiveness of the expatriate. Different value systems and living habits are a main cause of adaptation problems, and the inability to communicate only worsens the problem. Lack of verbal and nonverbal communication can affect every aspect of a person's career and personal life. If someone can't communicate, imagine the difficulty of going to the bank, dealing with customers, and even grocery shopping. In addition to the new surrounding environment, if the expatriate's family cannot accompany them or is not happy with the new living arrangements, it can result in separation anxiety. Humans need to feel secure in their environment, and with all of these setbacks that is extremely difficult to accomplish.
When an expatriate is not happy with their situation, it will reflect on their job performance. Other reasons for expatriates to fail are differences in managerial and organizational principles. If a foreign country has different principles than the home country, then implementation can be very difficult. This also applies to objectives and policies. Given such differences, the expatriate may need to conform to the local situation. "If the expatriate manager's authority is visibly constrained, his or her opportunity to establish and maintain an effective relationship with local associates is diminished." (Rodrigues, 2001) An expatriate's authority can appear constrained if the home office overcentralizes decision making.