Friday, May 31, 2019

Critical Race Counterstories Essay --

Tara J. Yosso's book Critical Race Counterstories Along the Chicana/Chicano Educational Pipeline uses a unique set of critical race counterstories focused on teachers and students in the Chicana/Chicano community. It reveals a great deficiency in appropriate U.S. education and investment, but demonstrates the richness of minority cultures and an interest in innovative approaches to education. This innovative work, in comparison to works published by many leading researchers, uses critical race theory to tell stories along the educational pipeline from primary school to university. It is an absorbing work that gives voice to the largest minority in the United States, presenting the latest demographic research on the status of Chicana/Chicano students' education at the time of its publication in 2006. Within the first chapter, we are presented with the foundation for this research and the sad reality of Chicana/Chicano education within the United States. In the U.S. the group with the lowest educational progress is the fastest growing racial/ethnic minority population in o...

Thursday, May 30, 2019

Urban Legends on the Web Essay -- Exploratory Essays Research Papers

Urban Legends on the Web     Urban legends are fascinating to almost everyone, and it would follow that there would be many websites available for the discussion of them. A simple search turns up thousands of hits on the subject, so how do we know which ones to believe? A good research site will have detailed information explaining the confirmation or rejection of the legend. References must be cited, especially when a legend is being proven as true. In addition, the site should also be easy to navigate and convenient. In my own curiosity, I have come across two sites that are excellent: the About.com Urban Legend Guide, and the Urban Legend Reference Page found at www.snopes.com, which was created by the San Fernando Valley Folklore Society. In contrast, there are websites (not to mention e-mail chain letters) that perpetuate false legends, and those that just do a poor job of evaluating them. In this category are the Urban Legends Archive, an amateur archive of myths heard in New Hampshire, and Monkeyburgers, a site filled with interesting legends but lacking adequate proof to back its conclusions. The About.com Urban Legend Guide, at http://urbanlegends.about.com/culture/beliefs/urbanlegends/mbody.htm?COB=home&PM=112_100_T, is an invaluable resource for researching urban legends. Upon reaching the page, the reader is given a list of topics to choose from, which always includes currently circulating hoaxes and legends as well as an archive full of information on every conceivable legend and internet hoax. Around Halloween time, of course, there are ghost stories and legends of the past that are explained and critiqued, but I found the most interesting section to be the one on e-mail hoaxes. Each individual with a... ...good research tool, but it needs some more concrete evidence. After all, how can we believe that the author is correct without proof? 
That is as crazy as believing an urban legend just because your brother's girlfriend's cousin told you so. In my search for urban legend sites, I found an incredible amount of information on the net, some of it high-quality, professionally presented information, and the rest simply unverified. The truth is that the connection we experience as part of the World Wide Web can either work for or against us. If we choose to evaluate information carefully before we accept it, and, more importantly, before we pass it on to others, the Web is invaluable. If, however, we take the information from a website and assume it is true without adequate proof, we are just perpetuating myths and untruths. This is the importance of critical reading.

Wednesday, May 29, 2019

Plastic Not Paper -- essays research papers

Plastic Not Paper     Walking through the grocery store I always try to look for the best buy. I always buy what's on sale; I guess you could say I'm cheap. Then I get to the checkout lane, preferably the one with fewer people. I empty my wallet and pay. Then I wait. I think it's going to happen but I am not sure. Then it does; the bagger says, "Would you like paper or plastic?" I look that person right in the eye and I tell him, "I want the one that's better for the environment, I want the one that will help prevent pollution, I want the one that costs less, I want plastic." Plastic bags save money, they conserve energy, they are practical, and they are better for the environment. That's why plastic bags are the best choice at the checkout line. Of course you're wondering how plastic bags save money; well, just think: 2,000 plastic bags stacked on each other reach a height of about 7.25 inches, while paper reaches a towering height of 7.5 feet. This means it takes seven trucks to deliver the same amount of paper as one plastic-delivering truck. Talk about a big waste of gas. Plastic bags cost a fraction of a cent to make, while paper costs close to 3 cents. This is money we save, as well as the store owner. This is a lot of money that is going to waste, considering that plastic bags are so much more practical than paper. You can use them for lots of other things. You can take them on trips to the grocery store, you can protect dry clothing ...

Evil Women Essay -- essays papers

Evil Women     Women are not always the affectionate, compassionate, and nurturing people that human instincts make them out to be. On the contrary, they are sometimes more ruthless and fierce than their male counterparts. A good example of this idea is in William Shakespeare's Macbeth. Through the use of various feminine roles throughout the play, Shakespeare manages to portray how dramatically important the witches are, along with how imminent greed and power can eventually grasp hold of Lady Macbeth's morals and thrust her into a state of emotional stupor. Shakespeare begins the play with the witches for several reasons. First, the fact that they are witches portrays many evil themes, since witches are a universal symbol for an advocate of the devil. They themselves foreshadow evil events to come. For example, to add to the witches' representation of evil, the clichéd background is that of thunder and lightning, which also represents wickedness and confusion. Shakespeare also uses the witches to give some background to the play: they decide to meet with Macbeth "when the battle's lost and won." Here, Shakespeare makes clear the fact that there is a battle taking place and Macbeth is involved. They choose to meet with Macbeth "upon the heath," wherein a heath is described as being uncultivated, open land. The uncultivated aspect of the heath can be used to foretell the uncivilized intentions the witches have for Macbeth. The last line of the scene is immensely important, for when the witches state that "fair is foul, and foul is fair," the reader later understands that this is the main theme of the play. This implies that appearances can be deceiving. What appears to be good can be bad, and this ... ...me will to have the throne, even at the cost of her own offspring. Similar to the witches, after Lady Macbeth states her desires to become male, Macbeth enters her room, and a discussion about the murder of King Duncan ensues. 
The dramatic effect that the witches and Lady Macbeth bring to the play is great. Without them, there would be no play, since Macbeth would have never even considered killing his faithful friend, King Duncan. Yet, because of them, he becomes torn between his lover and his comrade. Lady Macbeth's greed for power overwhelms her to the point where she would sacrifice anybody that stands in her path. The witches toyed with Macbeth's head just enough so that he thought he could commit the murder within reason. In the end, these two influences led to the death of King Duncan: physically, by Macbeth, but mentally, by the women in his life.

Tuesday, May 28, 2019

George of the Jungle :: Art

George of the Jungle     The film George of the Jungle, directed by Sam Weisman, is a romantic comedy and parody. In other words, it fully rips off Tarzan and makes a classic story seem stupid. In the beginning of the film the audience is shown a short animated cartoon about how George came to be in the jungle. When he was a baby, George was flying over the jungle in a plane when it crashed. The passengers never found him, and so apes raised him. Then the scene cuts to the present, when a woman called Ursula came to the jungle as a tourist. She unexpectedly meets her materialist fiance Lyle Vanderbrute, who wants to get out of the jungle as soon as possible. Lyle drags Ursula to see the apes, but then a lion traps them. This is when George appears and saves Ursula by owning the lion in wrestling. Then George carries her off, takes care of her, and goes back to the city with her. Then George's friend, an educated ape, is kidnapped by poachers, and George races back to save him. Ursula realises she loves George and goes after George, helps him bash the poachers, and lives with him happily ever after in the jungle. This film is extremely good, if the audience were three-year-olds. It has an extraordinarily shallow and predictable plot, and the gags and jokes are simply not funny. The actions of the characters are overtly exaggerated and very unrealistic. And the depth of the characters is about that of paper cuttings. This movie is designed to be viewed by people with an IQ of under fifty points. And frankly, I feel insulted at being made to watch this poor excuse of a study subject. There are, however, silly moments in the film. Like when we see a picture of a powerful, godlike human swinging through the jungle. The narrator has built up our expectation of greatness; then, suddenly, the hero slams into a tree. 
This provides the audience with a laugh the first time, but then, as if the scriptwriter ran out of ideas, they keep doing it again and again and again. I was trying not to chuck things at the television and video recorder after the fifth time. I mean, it's good making the viewers laugh; it's a comedy, after all. But if I could I would slap the film crew each around the face and say, "Look people, if you can't think of more than one gag for a comedy, try doing a serious film instead of embarrassing yourselves in front of millions of people and corrupting their minds."


Monday, May 27, 2019

Giving Credit Essay

Abstract     In this assignment we compare the lives of two men who created inventions that changed the world we live in. Throughout their careers and success they had hurdles to jump, and they became innovative thinkers to create futures in their challenging industry. Great men and thinkers can move mountains and open paths to new industries. Giving Credit Where Credit Is Due     As entrepreneurs in a growing world of technology and change, Andy Grove and Michael Dell pioneered the industry. They were innovators, visionaries, and industry leaders in their fields. Andy Grove envisioned change to create faster and more productive ways to process information. As an immigrant, he passed through the walls of terror in his home country of Hungary, which was facing many issues of destruction that the Nazis enforced. He fled to the United States in the 1960s to create a better life and future for himself. Andy established a small company that he called Intel, which created an immense new future for computer technology. His contributions earned him the esteemed acknowledgement of being named Time magazine's Person of the Year. He pursued his dreams, as all inventors do, to create products and progress in the ever-changing need for new technology. Andy Grove created the chip and the microprocessor that allowed information to be processed much faster and in a smaller size. Mr. Dell, also an inventor at an early age, set up a laboratory in his dorm room while attending college to create his personal computer. His success with the personal computer caught on quickly because he used a direct approach to include his customer in the structure of his product. Michael Dell understood that knowing his customers' needs and issues would enable Dell to create good, competitive products. He believed that cutting the middlemen out of the sales market would give him that advantage. 
With that direct approach, Dell was able to endure many obstacles, as well as the recession that created market crashes in the early 2000s. (Krames, Jeffrey A. What the Best CEOs Know: 7 Exceptional Leaders and Their Lessons for Transforming Any Business) Both Andy Grove and Michael Dell endured much criticism from competitors because of their business approach, products, and business logic. Mr. Grove instilled paranoia in his corporate executives so that they would be fearful of being caught off guard. Mr. Dell used his customers to create standards that would give him the leading edge over his competitors. Both were accused of manipulating the industry and deceiving product consumers. Intel was accused of creating an inferior product, putting fear into consumers that the chips being used would create disastrous failures in the products that endorsed them. Dell had a simple approach to creating sales and marketing: lowering the cost of his product by cutting out the middlemen, which made him more competitive and viable enough to sustain the economic downfalls. His competitors accused him of lowering his cost so as to take market sales away from the non-profitable companies.
Both Mr. Grove and Mr. Dell used simple methods to approach the needs and creativity of the growing markets. Andy Grove made sure that being too comfortable with the products his company offered just wasn't going to be enough. He looked ahead to keep creating new products that would soon be replaced by a growing need for better technology. Mr. Dell never doubted his approach to keep his overhead down, and he modeled his success on a customer-based product. He believed in creating his product for the customers rather than just offering his product without recognizing their valuable input. Their similarities are based on what the customer and technology need to sustain a profitable and sustainable future. 
They share innovative ideas in product development and customer-based satisfaction. They differed in the way each approached the markets: Intel needed to create a superior product sooner and more often to keep up with changing technologies, while Dell used the customer to create products based on user need. Each approached their core values of business and what worked for them to create value and sustain power. Dell gained market share by using the internet to generate sales through machine-to-machine based sales, and conserved his manpower to keep the cost of his product low. Intel spent many hours of research and product development to create new and innovative groundbreaking products that led the industry standards. Mr. Grove believed that being complacent would drive him out of the industry and eventually close the doors. He encouraged his executives to listen to his sales team to find out the needs of the customers; Andy Grove referred to those individuals as "Cassandras" who would pass on valuable customer feedback. His push to create paranoia made Intel strong and creative rather than comfortable, because Grove feared that waiting for the market to fail made Intel weak. Mr. Dell believed that his customers gave him the strength to survive and to customize his business to their needs. Dell never overstocked and always built their product to order. Dell realized that overburdening their inventory would devastate their ability to keep costs down. Dell learned from their mistakes, like the Olympic brand, which compromised the needs of customers and whose failure to utilize technology threatened the Dell line of products. Dell made efforts to speak to the public, hear what they had to say, and put that information into their products. (Krames, Jeffrey A. What the Best CEOs Know: 7 Exceptional Leaders and Their Lessons for Transforming Any Business) Both of these pioneers created a wealth of knowledge and bridged the technology industry to what it is today. 
Innovation is the foundation of the industry, and creating new and exciting products that are user friendly and affordable drives profits. Both of these two companies are very profitable and determined to create intelligent and appealing industries. The changing markets constantly demand new and better technologies that will enable us to accelerate computer speeds and retain more memory without absorbing higher prices and engineering. References     American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author. Krames, Jeffrey A. What the Best CEOs Know: 7 Exceptional Leaders and Their Lessons for Transforming Any Business.

Sunday, May 26, 2019

Love Relationship Among Student Essay

This research study examines the relationship between academic achievement and at-risk students. Many issues today affect the achievement gap and the ability of at-risk students to succeed. Most data, as revealed in the studies included in this review, conclude that the factors identifying at-risk students do have a significant impact on the academic achievement of individual students and schools. Most often, these students are not successful and eventually drop out of school or pursue a GED. Data indicate that teacher-student relationships, parent- or caregiver-student relationships, motivation, SES, and peer influence can affect success for at-risk students. Twelfth grade students from two high schools in an urban school district were given the opportunity to participate in a survey. This study investigates correlations between the dependent variable, grade point average (GPA), and the independent variables: teacher-student relationships, parent- or caregiver-student relationships, motivation, SES, and peer influence. Five regressions were run to determine if any of the independent variables predict GPA. Data from this study indicate that the variance between the dependent variable of GPA and each of the five independent variables is significant; however, the practicality of these results having a significant influence on the GPA of the study participants is minimal. The strongest variance found was between GPA and motivation and between GPA and peer influence. Other findings include a relationship between GPA and participation in sports or activities: as GPA increases, the percentage of students participating in sports and activities increases. The students in this study do have positive relationships with their teachers, have a parent or caregiver encouraging them to do well in school, and plan to attend college.

Saturday, May 25, 2019

Book Critique on Sharing Jesus Without Fear

Liberty Theological Seminary. Book Critique: Sharing Jesus Without Fear. A Paper Submitted to Dr. Gregory Hammond in Partial Fulfillment of the Requirements for the Course Contemporary Evangelism, Evan 565. Bibliographical Information     Fay, William, and Linda Evans Shepherd. Sharing Jesus Without Fear. B&H Publishing Group, Nashville, Tennessee, 1999. Author Information     William Fay, author of Sharing Jesus Without Fear, shares his testimony in the beginning of his book. Fay was once president and CEO of a large company. This was not all that he dabbled in, as he had ties to illegal activities such as racketeering, bookmaking, and gambling, and even ran a house of prostitution. (Fay, 1) According to Fay, he felt at this time in his life he had everything that life could offer: the expensive watches, money, multiple marriages. But this trend in his life did not continue. Fay eventually came to Jesus Christ and went to Denver Seminary, and now is an evangelist who travels throughout the U.S. Content Summary     Fay states at the very beginning what the purpose of his work is. He states that its objective is to provide freedom and to shape the believer so he can present the Gospel and not fail. (Fay, Preface) The book is broken down into 11 chapters, each dealing with aspects of presenting the Gospel. Fay starts by establishing the fact that because you present the Gospel and someone does not respond, it does not mean that you failed. Success when it comes to the Gospel is presenting the Gospel and living out the Gospel. Fay points to the fact that it boils down to obedience. In the next couple of chapters, Fay addresses the issue of not sharing the Gospel and the common objections and fears that Christians have when presenting the Gospel. In Chapter 2, the author states that we must repent of the sin of silence: Christians who never share the Gospel instead talk about the unsaved world, but do nothing about it. 
In this same chapter the author tries to instill in the believer the vision that Jesus Christ has for the world, and for us to share with the world this same message that can change their lives. Fay observes that there are some Christians who will tell people they will pray for them and perform other nice gestures, but never share the Gospel at all. Fay stresses that believers need to escape from this practice and see the need to reach people in their need. The next chapter in Fay's work addresses common objections or fears that Christians have when it comes to witnessing. In Chapter 3, Fay presents the question: is it the Christian's responsibility to share the Gospel, and if so, then why are we not doing so? He goes through several reasons why we might not share the Gospel. Some of the objections are as follows: being afraid of rejection, what friends might think, and not knowing enough Bible knowledge, to highlight some. Fay advocates that it is time for the Christian to drop the excuses. He also states in this chapter that if the Christian wants to see true joy in his life, then the Christian needs to start sharing his faith with others. (Fay, 28) The rest of his book is dedicated to explaining his philosophy and approach when it comes to sharing the Christian faith. Fay takes the next several chapters, in particular 4-6, to share the format which he uses and encourages believers to follow as well. He starts off in Chapter 4 with how to lead a conversation to discuss spiritual matters. He shares several icebreakers that can be used to lead a conversation to spiritual matters. He also shares in this chapter that five simple questions can be asked that will open the conversation towards the topic of Jesus Christ. The difference is that the believer is not to argue or defend right away, but to simply listen. 
The next chapter deals with the power of the Gospel: as one is engaged in a spiritual conversation, introduce Scripture, but have the person read it for themselves and do not explain it. Let Scripture speak for itself, and let the Holy Spirit do the work of convicting. His last chapter in regard to presenting the Gospel is about bringing the discussion to a decision by asking several more questions about making a decision. The questions are as follows: are you a sinner; do you want forgiveness of sins; do you believe Jesus died on the cross for you and rose again; are you willing to surrender your life to Jesus Christ; and are you ready to invite Jesus into your life and into your heart. (Fay, 62-63) At this point in the conversation, Fay charges the believer to be quiet, not say anything, and let God work. In the final several chapters of Fay's book, he addresses common objections raised by those who choose not to receive Christ and how to address those objections. He lists 36 common objections as the ones that are the most prominent. Fay also states that when addressing objections, one should make sure to ask the question why. If we are quick to defend, we might not get the right answer, but asking why usually reveals the reason, and we can try to address it. The last two chapters of his book deal with having a mixture of Christian and non-Christian friends, and how our lives around our non-Christian friends can make an impact. As well, he discusses how to pray for the lost and a simple plan to follow to pray for them using a seven-day model. His last chapter is more of an encouragement to go and start sharing our faith with the lost. The last question he poses before closing the chapter is: does the believer talk to the lost, or about them? This was his main question in the beginning of his work. 
Evaluation: In examining Sharing Jesus without Fear, one should gain some heightened confidence in regards to sharing the Gospel. The author's original intent for this book is freedom and confidence when presenting the Gospel. The author takes the approach in his work of starting where Christians are at, and charges them to break from silence. The author relieves Christians of the undue pressure that a believer might feel in regards to witnessing. He states that we are just obeying Jesus Christ by sharing the message that He left for us. It is not about how many we can stuff into a church, or putting on a badge of honor as if we accomplished the work ourselves. The question is: will the believer respond to the call of Jesus Christ? He states that a lot of Christians are not following the command of Jesus Christ and a lot of Christians are afraid. The author does not cite any surveys to prove this, but given the state of society today, it is probably an accurate statement. He does address the common objections or reasons why Christians do not go out and share the Gospel, with several scripture passages that address each reason why believers do not share their faith like they should. He also explains that God promises the believer that He will be with us when the believer shares his faith. To cite another problem that believers have when presenting the Gospel: how to introduce the topic without being too obtrusive. He gives some examples of icebreakers, or ways to lead a conversation to the desired result of sharing the Gospel. What is unique about the author's approach to the topic is his simple but direct approach to the Gospel. He states that the five questions he asks act as a guide to get to the heart of the matter, which is sharing the Gospel. He says that by asking these questions, people are more open than we realize and willing to share their personal beliefs. The comforting point he makes is that Christians are to share and live out the Gospel. 
The book takes on more of a practical tone. Basically, the whole idea is that sharing the Gospel does not have to be as complex and overwhelming as it is made out to be. The author teaches the believer that sharing the Gospel will bring a joy that is sometimes missing from the believer's life. The believer can share the message of the cross quite easily without fear, knowing that God is there to support the believer, and the success of the presentation does not depend on whether the person comes to Christ or not. Success for the believer is the fact that he went out and obeyed Christ, and if it results in a person coming to the knowledge of Jesus Christ, then that is a blessing. A couple of items are worth highlighting as elements that are quite good. The first is the redefining of what success is in terms of witnessing. The second is the approach to witnessing of asking the five questions and then just listening without interrupting, which is the hardest thing to do. Another element of his presentation is the use of scripture: have the person read it for themselves and discover the meaning of the passage by having the Holy Spirit illuminate their minds to the truth of the Gospel. Finally, the common objections that often come up are worth taking the time to read, to see if the responses are really helpful when a believer is in a conversation with an unbeliever. Some negative aspects of the book are as follows. First, there were very few references to other materials to support some of his claims, for example, the most common reason believers do not share their faith. He never indicated that he took a poll. Secondly, the author makes an assumption that a Christian cannot experience joy unless he shares the Gospel. (p. 27) The author makes this statement but does not support it with other believers' testimonies or with Scripture. The demographic of the person who should read this book is a believer wanting to share their faith who does not know how. 
This resource offers a straightforward way of expressing what has changed the believer's heart. This could also be taught to an entire church on how to start an evangelism program. This philosophy can be worked into an everyday conversation right away. A person would probably need to read through the work twice to fully understand the concepts being presented and to see how to implement them in their daily lives. It would have to become part of them. This book helps clarify a simple way to share the faith. There are a multitude of programs, methods, and marketing techniques available for sharing the Gospel these days. However, the approach examined in Sharing Jesus without Fear alleviates the pressure that some might feel to follow a system. This system is heavily dependent on scripture and on the work of the Holy Spirit to do the convicting. It really helps me to understand that the Gospel can be shared virtually anywhere without being so in people's faces. It opens a door to ask someone to share what they believe with the believer, and in return to show them the truth, instead of saying from the beginning, you are going to hell. It does take evangelism in a different direction, and his emphasis is on living out the Gospel. To have a powerful impact in regards to the Gospel, one must be living it, and this is so important. Having that balance is what will make the difference.

Friday, May 24, 2019

Point of Sale Essay

The point of sale is the place and time at which a transaction takes place. Whenever a buyer and seller come together for the purpose of conducting a transaction, a point of sale is created. Also called a point of purchase, a point of sale can take a wide variety of forms. The cash register line in a gasoline fueling station is a point of sale, for example, as is the checkout page in an online store. The point of sale can be a salesperson's desk in an auto dealership, as another example, as can someone's front porch in a door-to-door sales transaction. Transaction Processing System: A transaction processing system can be defined as a set of policies, procedures, equipment and technology designed to facilitate transactions at the point of sale. Transaction processing systems have evolved alongside advances in technology to add convenience, reliability and security to business transactions. Just like the point of sale itself, transaction processing systems can take a variety of forms. A cash box and a pad of paper at a lemonade stand is considered a transaction processing system, for example, as is a complex software package that connects digital cash registers, credit card processors, inventory databases and accounting software. Correlation: For every point of sale there must be a transaction processing system to accompany it. The correlation is so close that software-driven transaction processing systems are often referred to as POS (point of sale) terminals. Different point of sale situations call for different transaction processing systems, and new transaction processing systems emerge to facilitate new point of sale types. An online seller, for example, would be unwise to use a hand-operated cash register to process transactions over the phone; instead, online retailers often rely on software transaction processing systems. 
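The transaction processing systems described above, from a cash box and notepad to networked software, all share the same core job: record each sale and keep inventory in step with it. A minimal sketch of that core in Python; the class, item names, prices, and quantities here are invented for illustration and are not drawn from any product mentioned in this essay:

```python
# Hypothetical sketch of a software transaction processing system:
# it records each sale at the point of sale and decrements inventory,
# so the books and the stock on hand stay in agreement.

class TransactionProcessor:
    def __init__(self, inventory):
        self.inventory = dict(inventory)  # item -> units on hand
        self.sales_log = []               # one record per completed sale

    def record_sale(self, item, quantity, unit_price):
        """Record a sale and decrement stock, refusing oversells."""
        if self.inventory.get(item, 0) < quantity:
            raise ValueError(f"insufficient stock for {item}")
        self.inventory[item] -= quantity
        sale = {"item": item, "qty": quantity, "total": quantity * unit_price}
        self.sales_log.append(sale)
        return sale

pos = TransactionProcessor({"lemonade": 20})
pos.record_sale("lemonade", 2, 1.50)
print(pos.inventory["lemonade"])   # 18
print(pos.sales_log[0]["total"])   # 3.0
```

Because every sale passes through `record_sale`, the sales log and the inventory can never silently drift apart, which is exactly the bookkeeping problem the essay says these systems exist to solve.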
Other Applications: The point of purchase is an important concept for other marketing disciplines in addition to sales. Point of purchase displays in retail outlets use advertising or sales promotions to encourage impulse purchases while customers stand in line, for example. The 21st century has seen the rise of mobile points of sale and transaction processing systems, bypassing traditional cash-register sales models for face-to-face exchange situations. In Apple's retail stores, for example, salespeople use smartphone credit-card readers and mobile transaction processing systems to ring customers up wherever they stand. http://smallbusiness.chron.com/point-sale-vs-transaction-processing-systems-17548.html 7 reasons to switch to a point-of-sale system, by Jeff Wuorio. If you're a veteran retailer, you know the problem: your inventory doesn't match your tallies. Sales are going unrecorded. Your staff is spending far too much time chasing mistakes instead of tending to customers. Something is seriously wrong, and you're just not sure what the problem is. These and other snafus suggest that it's time that your business did away with its cash registers and stepped up to a point-of-sale (POS) system, such as Microsoft Dynamics Retail Management System and Microsoft Dynamics Point of Sale (POS). A POS system is a computer software and hardware network that records sales as they're occurring; it solves a variety of operational and record-keeping headaches. If you need more proof, here are seven signs that your business could boom with a point-of-sale system. 1. Your sudden shrink no longer goes undetected. POS systems such as Retail Management System are designed to immediately record any and all sales. 
Not only does that mean timely and accurate sales tracking, but a POS system also lets you readily identify inventory levels, particularly when what you have on the books doesn't jibe with actual stock. "You see it with the onset of sudden shrink, when you realize that inventory is missing or your numbers just never seem to match up," says John Rarrick of RBS Inc., a Nyack, N.Y., consulting concern specializing in startups and small businesses. "Almost every modern POS has a receiving and inventory module that, when used properly, can help turn up the cause of the shrink." 2. Markdown management is much easier. A common land mine for many small to medium-sized businesses is price reduction: knowing which items have been marked down and recording those discounts accordingly. Rather than wrestling with cash-register receipts at day's end, a POS automates the process of introducing markdowns and, in turn, tracking them accurately. "The trends in POS are not just inventory accuracy but the use of pricing models to allow for markdown management," says Gary Ruffing, senior director of retail services for BBK Ltd., a business advisory firm in Southfield, Mich. 3. Promotions can be tracked more successfully. A similar dynamic holds true with promotions. Whether through coupons, special discounts or other vehicles, promotions can be key to attracting and retaining business. Trouble is, managing and reconciling short-term specials, not to mention pinpointing their impact, can be nigh impossible without the automation and immediacy of a point-of-sale system. "Many small retailers invest in things such as direct home marketing," Rarrick says. "At the end of the promotion, those with manual cash registers are hard pressed to tell you how successful the promotion was. The POS stores can pretty much tell you to the penny how they did." 4. You can maintain control in absentia. 
You may be surprised to discover that you actually run two businesses: one when you're there, and its evil twin when you don't happen to be around. Many operations suffer in employee efficiency and customer service when the boss is away. Automating a host of functions via a POS can help boost those areas, no matter where the head honcho happens to be. "You simply can't be there all the time," says Jim Melvin, chief executive officer of Siva Corp., a Delray Beach, Fla., company which provides point-of-sale systems to restaurants. "A POS lets you have that important level of control when you're not there." 5. Your prices are consistent from one location to the next. Nothing can prove more disconcerting than having a customer question why one item has one price at one store, yet a different price at another. If your business operates at more than one location, a point-of-sale system ensures pricing consistency. Even better, a POS system automates overall inventory control, helping to keep stocks in proper balance depending on demand and other factors, which can vary from one location to the next. "It really lends itself to a better overall customer experience, the sorts of things a customer expects when he walks through the front door," says Melvin. 6. You get many tools in a single package. Buying business equipment piecemeal can be pricey. If you find your checkbook wearing thin from the expense of software and other gear, a comprehensive point-of-sale system may include them in a single package. "Most POS systems have add-on modules like payroll time clocks and customer preference databases," says Rarrick. "That removes the need for small businesses to invest in separate systems for those purposes." 7. You can make better use of your personnel. 
Little is more maddening to a business owner than watching his or her staff bogged down with inefficient, futile responsibilities, from double-checking inventory disparities to seemingly endless cash-register reconciliation. Perhaps the greatest advantage of a comprehensive point-of-sale network is the freedom it can afford your personnel to give their energy to what genuinely matters the most: helping customers. "A good POS allows you to allocate your human resources to the customer service area of the business," Ruffing says. "That means they no longer have to be counting, calculating, ordering, and checking cash-register accuracy." http://www.microsoft.com/business/en-us/resources/technology/business-software/7-reasons-to-switch-to-a-point-of-sale-system.aspx?fbid=o1kGJp5H1vJ Since it opened its doors in the Philippines in December 2000, MINISTOP has always envisioned becoming the leader in the convenience store industry. MINISTOP has made its presence felt by being the community's warmest and friendliest modern combo store. It takes pride in its wide range of quality products, at affordable prices, and value-added service. An introduction to point of sale software: Point of sale software gives business owners a convenient way of checking out customers and of recording sales. It can keep a record of the store inventory, updating it when an order is processed. It can also print out receipts, carry out credit card processing, track customers, etc. Point of sale software eases the flow at checkout terminals, while recording all the information that can help you make better business decisions. Point of sale software allows users to input via keyboard or mouse, and some even have a touch screen interface. You can install the software on your checkout register. When checking out a customer you can either input the sales item yourself or use a bar code scanner. The point of sale software will look up the item in the inventory and bring up the price. 
It can also calculate tax on the item and change for the customer. POS software can print out receipts and reports. Point of sale software makes your business accounting a lot easier by creating reports on inventory, sales, customers, etc. Since it is already recording each sale, it can easily tell you the sales and revenue of the day. Point of sale software can also help with credit card processing. Credit cards are a preferred method of payment: people do not want to carry around cash for all their purchases. The credit card is a convenient method of payment, and if you do not have credit card processing, your business can lose some of its competitiveness. Point of sale software receives input from the POS hardware, which is the scanning station for the credit card. The software will process the credit card payment for you. It can check that the card has not expired and is valid. You will need a merchant account for the point of sale software to do its job. POS software is generally easy to install and easy to use. You will need to know how to update inventory and record a price change for an item. Point of sale software. 15 November 2004. Proposal: Point of Sale for The Brighter Side. Most small businesses underestimate the importance of managing their inventory. They do not realize that many headaches and fire drills are caused by the lack of control and knowledge of their inventory. Whether it is a lack of knowledge of the quantity or specs of a certain product, businesses too frequently use outdated inventory systems. Insufficient systems do not allow them to get the most out of their inventory, because when used properly, inventory management systems allow businesses to make a concise, real-time analysis of products and markets that helps them make better business decisions. 
Inventory management systems also allow businesses to better serve their customers, since they keep a detailed and accurate record of purchase histories and trends so they can reorder products more efficiently. With a controlled inventory, management will be notified when products need to be reordered, are selling quickly, or are disappearing due to theft. In essence, the business becomes organized, and by controlling inventory, profits can increase. Inventory management allows businesses to make smart and informed decisions about promotions and specials, since they are better able to monitor the rate of turn for their merchandise. In addition, such systems let management know when a product is no longer profitable. Products are the heart and soul of a business. Even with the best customer service, a business will not be profitable without a good to sell. The product is the link between the business and its customer. It was interesting to hear Kelly O'Donnell, an owner of The Brighter Side, tell that her company does not use any inventory control whatsoever. The Brighter Side spends thousands of dollars on merchandise but does not systematically track how the products are doing or how much is left. During our interview I said to her, "Do you send your daughter to school to learn and not see how her.." Chapter 2: Review of Related Literature. This chapter tackles the related literature for this system, and also the position of the author on her system. Related Literature: The hardware of a POS system is also distinctive and important. A typical system includes a display screen for the clerk, a customer display, a cash drawer, a credit card swiping system, a printer, and a bar code scanner, along with the computer loaded with the POS software. Custom features may be added or removed, depending on the industry. 
A restaurant POS system, for example, may have a feature which prints order tickets directly in the kitchen, or a grocery store may have an integrated scale for weighing goods. Early electronic cash registers (ECR) were controlled with proprietary software and were very limited in function and communications capability. In August 1973 IBM announced the IBM 3650 and 3660 Store Systems that were, in essence, a mainframe computer packaged as a store controller that could control 128 IBM 3653/3663 point of sale registers. This system was the first commercial use of client-server technology, peer-to-peer communications, Local Area Network (LAN) simultaneous backup, and remote initialization. By mid-1974, it was installed in Pathmark Stores in New Jersey and Dillard's Department Stores. Programmability allowed retailers to be more creative. In 1979 Gene Mosher's Old Canal coffee shop in Syracuse, New York was using POS software written by Mosher that ran on an Apple II to take customer orders at the restaurant's front entrance and print complete preparation details in the restaurant's kitchen. In that novel context, customers would often proceed to their tables to find their food waiting for them already. This software included real-time labor and food cost reports. In 1986 Mosher used the Atari ST and bundled NeoChrome paint software to create and market the first graphical touch screen POS software.
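The checkout arithmetic described earlier in this section, looking up a scanned item's price, adding sales tax, and computing the customer's change, can be sketched in a few lines of Python. The item codes, prices, and 7% tax rate below are hypothetical, chosen only to make the flow concrete:

```python
# Hypothetical checkout flow: look up scanned items, add sales tax,
# and compute the customer's change. Codes, prices, and the tax rate
# are invented for illustration.

PRICE_LOOKUP = {"4011": ("bananas", 0.89), "4900": ("cola", 1.25)}
TAX_RATE = 0.07  # assumed 7% sales tax

def checkout(scanned_codes, cash_tendered):
    """Return (total due, change) for a list of scanned item codes."""
    subtotal = sum(PRICE_LOOKUP[code][1] for code in scanned_codes)
    tax = round(subtotal * TAX_RATE, 2)
    total = round(subtotal + tax, 2)
    change = round(cash_tendered - total, 2)
    return total, change

total, change = checkout(["4011", "4900"], 5.00)
print(total, change)  # 2.29 2.71
```

A real register would also decrement inventory and print a receipt at this point; the sketch keeps only the price-lookup and money arithmetic the essay describes.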

Thursday, May 23, 2019

Labor Relations Essay

1. Define the term collective bargaining and list and describe four issues that are mandatory components of a collective bargaining agreement. Collective bargaining can be defined as the process involving representatives of both employers and employees coming to terms and conditions of employment that both parties agree on. These agreements are written into legally binding contracts good for one to five years. (Budd, 2009, p. 229) Four issues that are mandatory components of a collective bargaining agreement are compensation, personnel policies, and employer rights and responsibilities. Compensation involves wages, benefits, vacations, holidays, and profit sharing. Personnel policies refer to layoffs, promotions, and transfer policies. Employer rights and responsibilities include, but are not limited to, seniority rights, job standards, management rights, just cause, safety standards, and discipline and discharge. (Budd, 2009, p. 13) Employer rights and responsibilities as a component of collective bargaining is illustrated in an article by Aaron Kuriloff. According to this article, the NFL's position is that the NFL Players Association "isn't bargaining in good faith, using delays to run out the clock on talks before disbanding the union and suing the league under antitrust law for colluding to restrict pay" (Kuriloff, 2011). The National Football League has asked the National Labor Relations Board (NLRB) for clarification on using antitrust laws to block a lockout, and to clear up whether the National Football League Players Association is a certified labor union. The National Football League's position is that the National Football League Players Association is using delaying tactics and threatening a work stoppage. The next component of collective bargaining I found, in an article by Howard Beck of the New York Times, deals with compensation. The National Basketball Association is also facing a collective bargaining agreement (CBA) dispute. Mr. 
Beck states that the owners are proposing "a total overhaul of the N.B.A.'s economic system, including a hard salary cap, shorter contracts and a 38 percent reduction in player salaries (about $800 million)" (Beck, 2011). The owners want to cut salaries because 17 of the 30 franchise teams are losing money, at a sum of $300 million a year. The National Basketball Players Association disputes the league's figures because attendance is up and the league is on pace for its highest viewership of all professional sports. To conclude, these collective bargaining agreements of both the NFL and the NBA can be categorized as distributive bargaining: both parties are going to either win or lose some concessions to remain viable and profitable organizations. 2. List and discuss three U.S. laws that support collective bargaining, and three examples of employer unfair labor practices. The three laws that support collective bargaining between employers and labor unions are the National Labor Relations Act (NLRA) of 1935, the Labor-Management Relations Act of 1947, and the Labor-Management Reporting and Disclosure Act of 1959. The National Labor Relations Act (NLRA) of 1935, which is also known as the Wagner Act, made it legal to form unions and engage in collective bargaining. The Wagner Act created a labor environment to equalize the bargaining power between the employer and employees, as stated by the text itself: it is "the policy of the United States to eliminate the causes of certain substantial obstructions to the free flow of commerce and to mitigate and eliminate these obstructions when they have occurred by encouraging the practice and procedure of collective bargaining" (National Labor Relations Board). The main purpose of the Wagner Act was to encourage collective bargaining in the private sector by protecting workers' rights to join and form labor unions. (Budd, 2009, p. 
119) Furthermore, this act also gave broader powers to the federal government in the regulation of labor relations, and it banned employers from punishing workers for using their collective bargaining rights. Americans did have the right to join unions and strike prior to the passage of this law; previously, however, employers had been free to spy on, question, discipline, discharge, terminate, and blacklist employees for either joining unions or striking. According to the website Infoplease.com, the Taft-Hartley Act "amended much of the National Labor Relations (Wagner) Act of 1935, the federal law regulating labor relations of enterprises engaged in interstate commerce, and it nullified parts of the Federal Anti-Injunction (Norris-LaGuardia) Act of 1932. The act established control of labor disputes on a new basis by enlarging the National Labor Relations Board and providing that the union or the employer must, before terminating a collective-bargaining agreement, serve notice on the other party and on a government mediation service. The government was empowered to obtain an 80-day injunction against any strike that it deemed a peril to national health or safety." (Taft-Hartley Labor Act, 2011) The Labor-Management Relations Act gave the government far more oversight over union activities, including the right of the U.S. president to stop a strike if it was deemed dangerous to national health. The act also stripped unions of their power in several ways, including forbidding unions from contributing to political campaigns and only allowing unions to organize after a majority vote by employees. Although President Truman vetoed the act, it passed easily over his veto, and this act remains the heart of U.S. labor law. The Labor-Management Reporting and Disclosure Act of 1959. 
Also called the Landrum-Griffin Act, this law amended the Taft-Hartley Act to protect the rights of union members within their union and imposed new reporting requirements and codes of conduct on unions and employers. This act was created in response to the surge of corruption among various labor union officials who used violence as a way to quell union opposition from employers and employees. Another purpose of the Labor-Management Reporting and Disclosure Act of 1959 was to stop labor unions from being infiltrated by communists. Furthermore, former members of the Communist party and former convicts were prevented from holding a union office for a period of five years after resigning their Communist party membership or being released from prison. (infoplease.com, 2011) Three examples of unfair labor practices include firing a union supporter or someone trying to form a union; failing to bargain in good faith; threatening employees with job loss, demotion, or physical harm if they support a union; and preventing employees from talking about a union or wearing union buttons when it doesn't interfere with their work duties or customers. The National Labor Relations Board is an "independent federal agency devoted to conducting representation elections and adjudicating unfair labor practices" (Budd, 2009, p. 124). I want to discuss a news article written by Chris Sieroty, who writes for the Las Vegas Review-Journal. Mr. Sieroty details in his article the labor unrest being experienced in Las Vegas, Nevada, concerning alleged discrimination against employees based on their national origin. "The protesters also urged Station Casinos to support the union's efforts to establish a new standard when it comes to alleged discrimination against employees based on national origin." (Sieroty, 2011) Accordingly, the labor union, Culinary Local 226, is attempting to unionize nearly 13,000 workers at the 18 hotel-casinos operated by Station Casinos in Southern Nevada. 
Station Casinos has been charged with using threats, intimidation, surveillance, bribery, discrimination and other illegal activities against employees engaged in lawfully protected union activities. The National Labor Relations Board alleges that for approximately seven months Station Casinos used these illegal tactics to divide and conquer in a union-busting strategy by not supporting an anti-discrimination policy. The next article I want to discuss is by Steven Greenhouse. Mr. Greenhouse writes for the New York Times, and his article was about a company illegally firing an employee after she criticized her supervisor on her Facebook page. The action falls under the unfair labor practice of firing a union supporter or someone trying to form a union. The National Labor Relations Board stepped in to clarify that under the National Labor Relations Act, a worker cannot be fired for criticizing their employer. As an example of its clarifying position, the National Labor Relations Board states that the act gives workers a federally protected right to form unions, and it prohibits employers from punishing workers, whether union or nonunion, for discussing working conditions or unionization. The labor board said the company's Facebook rule was overly broad and improperly limited employees' rights to discuss working conditions among themselves. In summary, if we are guaranteed freedom of speech, it should always trump a business's restrictive policies on speech. (Greenhouse, 2010) 3. Describe the process of establishing and decertifying a collective bargaining unit in the workplace. Initiating an Organizing Drive: The first step in establishing a union in the workplace is to begin by initiating an organizing drive. There are three possible initiators of an organizing drive: one or more employees, a union, or an employer. (Budd, 2009, p. 
188) Then you must first find out if your co-workers want to form a union by gauging their interest, quietly talking to a few trusted co-workers who you think may be interested in improving the workplace. Create a representative group of co-workers, normally called an Organizing Committee, to make sure your efforts to form a union succeed. The Organizing Committee educates fellow workers about the benefits of unionizing and their rights under the law. The Organizing Committee should consist of people from every department in your workplace and should be representative of all races, genders, and ethnicities. The committee then should gather an employee list, as well as information about your employer. (How To Organize A Labor Union At Your Workplace, 2010) Building and Documenting Support: The second step in establishing a union in the workplace is to begin building and documenting support. You must document that a minimum of 30% of your fellow employees have shown interest in forming a union at your workplace. This step is most often accomplished by the signing of Authorization Cards, or simply "A Cards," by the employees. By virtue of your signature, the A Card signifies that you desire the union to represent you for the purpose of collective bargaining. However, if you gather more than 50% of the workplace showing interest in being represented by a union, you may request that the employer recognize your union. Subsequently, if the employer refuses to voluntarily recognize the labor union, there are alternative routes to recognition by the employer available. Alternatives to Voluntary Recognition: The third step in establishing a union in the workplace is to pursue alternatives to voluntary recognition. After the majority of the employees have decided to join the union, your employer will either recognize the union or refuse to recognize it. One alternative for a union to be recognized by an employer is launching a recognition strike. 
A recognition strike is a strike used by employees to make an employer recognize their labor union. This strike cannot last more than thirty days without the strikers risking being replaced. The Landrum-Griffin Act created an alternative to striking for union recognition: filing a petition with the National Labor Relations Board (NLRB) to hold an election to certify the labor union. The board will then decide who is eligible to vote, and it will schedule the election. File Election Petition: The next step, which can be either the third or fourth step in establishing a union in the workplace, is done by filing a petition with the National Labor Relations Board (NLRB) to hold an election. You must request that the National Labor Relations Board (NLRB), which is an impartial government agency, hold a secret ballot election. Once it is determined that the bargaining unit is appropriate and that no supervisors or management are included, a date will be set by the NLRB for the election, usually 5 to 7 weeks out. Hold National Labor Relations Board Elections: The next step, which can be either the fourth or fifth step in establishing a union in the workplace, is carried out by workers in favor of the union. The pro-union workers will have to campaign to keep pro-union workers on board and take steps to win over any workers who are against the union. If the union wins the election, by law the employer must recognize and bargain with the union. The National Labor Relations Board is responsible for setting up polling places, usually on the employer's property. The National Labor Relations Board is also responsible for supervising the election. Casting a paper ballot into a ballot box is the usual medium by which employees vote. At the end of the voting period the polls are closed and the ballots are counted right on the spot. The union must win the majority of the votes to be declared the winner. The opposite of a certification election is a decertification election. 
This type of election is used to determine whether "a majority of unionized employees no longer wish to be represented by their union" (Budd, 2009, p. 192). To request such an election, at least 30 percent of the employees must file a decertification petition asserting that the currently certified union no longer represents the employees in the bargaining unit before it can be considered by the National Labor Relations Board (NLRB). To decertify a union, the union representation must have been in effect for more than a year, and the decertification petition has to be filed during a window of 60 to 90 days before the expiration of the union contract; healthcare workers are afforded additional time for decertification, and their window is 90 to 120 days prior to the expiration of their contracts. According to the Ohio Hospital Association's information on the decertification process, "The general rule for unions with a negotiated contract in place is that a decertification petition can only be filed 60 to 90 days prior to the expiration of the contract (or every three years, whichever comes first). For health care employees, this window is 90 to 120 days prior to the expiration of the contract" (Ohio Hospital Association). The National Labor Relations Board requires that all decertification efforts be free from managerial influence, and that all signatures on the petition be collected during non-work time and off the worksite.

After the National Labor Relations Board verifies the signatures on the decertification petition, a decertification election is scheduled in approximately 60 days. The union will be decertified if a majority of the members vote against being represented by the union as their bargaining unit. In an article by Aaron Kuriloff, he quotes the NFL's position that the NFLPA is using decertification as a tactic to negotiate a better labor contract.
The NFL said the union's threat to decertify "is a ploy and an unlawful subversion of the collective bargaining process, there being no evidence whatsoever of any (let alone widespread) dissatisfaction with the union by its members" (Kuriloff, 2011). Free agency was created when the union was decertified after the 1987 strike. The NFL owners simply want the NFLPA to bargain in good faith, and the NFLPA wants the same, with more revenue sharing.

4. Describe the process of administering a collective bargaining agreement (CBA), including the role and function of an arbitrator. What are the issues, and how are they handled?

Through the process of collective bargaining, employers and unions negotiate terms and conditions of employment and put these terms in a written contract, also called a collective bargaining agreement (Budd, 2009, p. 229). While administering a collective bargaining agreement, the employer and the union are obligated to meet at reasonable times to negotiate in good faith about mandatory bargaining items. Mandatory bargaining items are wages, hours, vacation time, insurance, safety practices, and the terms and conditions of employment. According to the National Labor Relations Act, if either party refuses to bargain collectively with the other, it is considered an unfair labor practice; however, the parties are not forced to reach an agreement or make any concessions.

The collective bargaining process comprises five core steps:

Prepare: This phase involves the composition of a negotiation team. The negotiation team should consist of representatives of both parties with adequate knowledge and skills for negotiation. In this phase both the employer's representatives and the union examine their own situations in order to develop the issues that they believe will be most important. The first thing to be done is to determine whether there is actually any reason to negotiate at all.
A correct understanding of the main issues to be covered and intimate knowledge of operations, working conditions, production norms, and other relevant conditions is required.

Discuss: Here, the parties decide the ground rules that will guide the negotiations. A process well begun is half done, and this is no less true in the case of collective bargaining. An environment of mutual trust and understanding is also created so that a collective bargaining agreement can be reached.

Propose: This phase involves the initial opening statements and the possible options that exist to resolve them. In a word, this phase could be described as brainstorming. The exchange of messages takes place and the opinion of each party is sought.

Bargain: Negotiations are easy if a problem-solving attitude is adopted. This stage comprises the time when "what ifs" and suppositions are set forth and the drafting of agreements takes place.

Settlement: Once the parties are through with the bargaining process, a consensual agreement is reached wherein both parties agree to a common decision regarding the problem or issue. This stage is described as consisting of effective joint implementation of the agreement through shared visions, strategic planning, and negotiated change. (Collective Bargaining Process, 2007)

According to the website Industrial Relations Home, the collective bargaining process comprises five core steps: Prepare, Discuss, Propose, Bargain, and Settlement. The first step, preparing, is getting your team together; the second step is discussing the grievances, common concerns, and goals; the third step is proposing methods for resolving those grievances, concerns, and goals; the fourth step is bargaining to reach an agreement that all parties can endorse; and the final step is making a settlement on the terms of the contract.

WORKS CITED

Budd, J. W. (2009). Labor Relations: Striking a Balance. New York: McGraw-Hill.

MLBPA. (2014).
History of the Major League Baseball Players Association. Retrieved from http://mlb.mlb.com/pa/info/history.jsp

Kuriloff, A. (2011). NFL Files Unfair-Labor Practices Complaint Against Union in Contract Talks. Retrieved February 19, 2011, from Bloomberg: http://www.bloomberg.com/news/2011-02-14/nfl-files-unfair-labor-practice-char
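The decertification filing windows quoted earlier in this post (petitions may be filed 60 to 90 days before the contract expires, or 90 to 120 days for health care employees) can be sketched as a date calculation. The function below is a hypothetical illustration of that timing rule, not legal advice or part of any NLRB system.

```python
from datetime import date, timedelta

def decert_window(contract_expires, healthcare=False):
    """Return the (opens, closes) dates of the decertification filing
    window, per the rule quoted above: 60-90 days before the contract
    expires, or 90-120 days for health care employees.
    Hypothetical helper for illustration only."""
    near, far = (90, 120) if healthcare else (60, 90)
    # The window opens at the far bound and closes at the near bound.
    return (contract_expires - timedelta(days=far),
            contract_expires - timedelta(days=near))

opens, closes = decert_window(date(2019, 12, 31))
print(opens, closes)
```

For a contract expiring December 31, 2019, the ordinary window runs from October 2 through November 1, 2019; passing `healthcare=True` shifts it 30 days earlier.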

Wednesday, May 22, 2019

Five Moral Dimensions Of The Information Essay

1. The Moral Dimensions of Information Systems

The moral dimensions that frame the major ethical and social concerns generated by information systems are as follows:

(i) Information Rights and Obligations: What information rights do individuals and organizations possess with respect to themselves? What can they protect? What obligations do individuals and organizations have concerning this information?

(ii) Property Rights and Obligations: How will traditional intellectual property rights be protected in a digital society in which tracing and accounting for ownership is difficult and ignoring such property rights is so easy?

(iii) Accountability, Liability and Control: Determining who should take responsibility for decisions and actions. Many of the laws and court decisions establishing precedents in the areas of accountability, liability, and control were firmly in place long before information systems were invented.

(iv) Quality of System: This has to do with data quality and system errors. As we rely more on information systems, data quality issues are gaining more importance. These issues affect you as a consumer and as a user.

(v) Quality of Life: An interesting quality of life issue that affects more and more people personally is the ability to work from home. Before the advent of information systems, most people used to have a regular day job, 8:00 a.m. to 5:00 p.m., five days a week, in a typical office setting in our society. With the introduction of information systems, people can work seven days a week, at all hours of the day, at home and on their way, especially the management staff of a company. The quality of life issues would also be incomplete without mentioning online love affairs. People also lose their jobs and ways of life because of information systems.
All these are valid concerns of information systems.

2. Ethical Analysis

This section presents the step-by-step process of how one should analyze ethical concerns when confronted with such a situation:

(i) Identify and Clearly Describe the Facts: This involves finding out who did what to whom, and where, when, and how. In most cases, you will be surprised by errors in the initially reported facts, and you will find that simply getting the facts straight helps in defining the solution. It also helps the opposing parties involved in an ethical dilemma to agree on the facts.

(ii) Define the Conflict and Identify the Higher-Order Values Involved: The parties involved in disputes over ethical, social, and political concerns always claim to pursue higher values such as privacy, freedom, and protection of property. It is very important to clearly define the conflict in ethical concerns and identify the ones with higher values.

(iii) Identify the Stakeholders: You must find out the identity of the stakeholders, as any ethical, social, or political issue has stakeholders: players in the game who have an interest in the outcome, who have invested in the situation, and who want something from it.

(iv) Identify the Reasonable Options to Take: It may be discovered that none of the options satisfies all the interests involved, while some of the options do a better job than others. So arriving at a good or ethical solution may not always be a matter of balancing consequences to stakeholders.

3. Property Rights and Obligations: Trade Secrets, Copyright, Patent Law

Trade secrets are any intellectual work or product used for a business purpose that can be classified as belonging to that business. Copyright protects the creators of a property against copying by others for any purpose during the life of the author. Patent law grants the owner an exclusive monopoly on the ideas behind an invention for 20 years.

Tuesday, May 21, 2019

Analysis of “1954” by Sharon Olds Essay

"1954" by Sharon Olds is a poem depicting the horrors of the rape and murder of a young girl by a man named Burton Abbott in 1954. Olds uses a frantic and horrified tone, highlighted by a careful choice of diction, to convey her messages: that any ordinary-looking person can disguise evil, and that the current justice system has a hypocritical eye-for-an-eye mindset that only ends up destroying human life.

The structure of "1954" is built on enjambment and disconnected sentences. This helps the reader understand the fear the speaker feels, as if the words are simply pouring out, developing the frantic and horrified tone of the poem. This fear builds as the speaker begins to draw connections between the victim and herself. The author uses clear imagery in phrases like "I feared the word eczema, like my acne" and "like the X in the paper which marked her body" to help make these connections. The speaker relates the victim's eczema to her own acne, and recognizes how an innocent little girl has been reduced to nothing but an X that marked where her lifeless body was left. Now that the speaker can relate to the victim in a clear way, she begins to realize how ordinary the murderer was.

The author uses simple, ordinary diction to describe him. Phrases like "as if he were not someone specific," "his face was heavy and ordinary," and "he looked almost humble" are examples of the author's use of ordinary diction that makes the killer seem normal. The speaker then says the killer went against "what I'd thought I could count on about evil." This supports the message that evil can be disguised in anyone: by making the murderer seem ordinary, the author forces the speaker and the reader to begin to question the people around them.

A definite shift occurs in line 22 of the poem.
The author shifts from using the word "fear" to the word "pity" when referring to the crime, and begins to use "fear" to describe how the speaker feels about the consequences the murderer, Burton Abbott, must face. The speaker realizes that "the good people, the parents" were going to "fry" Mr. Abbott on the electric chair for his crime. The author deliberately used the word "fry" to express that the parents of the victim did not just believe that Abbott should receive capital punishment; they wanted him to suffer, and they wanted to watch him writhe in pain for what he did to their daughter. As a result, the speaker begins to fear electricity, even her own electric blanket. The author uses this and other carefully chosen phrases, like "death to the person, death to the stand planet," to demonstrate the hypocrisy that exists in the justice system's eye-for-an-eye mentality when it comes to capital punishment. When someone commits a murder, they are sentenced to death, simply resulting in further loss of human life. People who see these crimes in the news not only fear the murderer; they fear the brutal punishment just as much, demonstrated by the speaker's new fear of electricity.

The author uses carefully chosen diction and tone to communicate two distinct messages to the reader. Both of these messages come together at the end of the poem to pose a single, lingering question: whom should we fear more, the murderer or our own justice system?

Monday, May 20, 2019

Drug Abuse in Inner Cities Essay

Inner-city areas have become the primary location for minorities, and the easiest place to find illegal drugs. Evidence shows that there is a link between the increase in illegal drug use and the increase in minorities living in inner-city communities who are unemployed or collect welfare. In "Drug Abuse in the Inner City: Impact on Hard-Drug Users and the Community," Bruce D. Johnson states, "Illicit drug use in the inner city expanded rapidly in the 1960s and has continued unabated into the 1990s" (9). Johnson also writes, "During the period 1960-80, the number of persons living in communities primarily occupied by low-income (including welfare and unemployed) blacks and Hispanics approximately doubled" (10). The two previous quotes provide evidence that illegal drug use and the number of minorities living in inner-city communities have both increased over time. Minority drug abuse in the inner city results in the organization of drug distribution systems, which can cause violence that negatively affects families.

Drug abuse is a problem in inner cities, and has been for a long time. During World War II, factory workers were needed in order to meet the needs of the United States Army. Between the 1930s and 1940s, with the bulk of those factories located in the North, a large group of Southern African Americans migrated to the Northern states in search of jobs. The low-wage factory jobs that African Americans and other minorities occupied forced them to reside in the ghettos. In "Drug Abuse in the Inner City: Impact on Hard-Drug Users and the Community," Johnson states that "Prior to 1940, about 20 percent of those arrested for narcotics law violations were black, a figure that increased to over 50 percent by the mid-1950s" (12). Johnson provides information showing that the migration of African Americans sparked minority drug abuse within inner-city communities.
In the 1950s, minorities use of illegal drugs began to increase, and have continued to into present day.The nigh dramatic increase in the use of drugs within minority communities occurred in the 1960s and the earlier 1970s. During that time period, many events took place that tincted drug abuse in the inner citys minority communities. Johnson writes Heroin use and addiction, particularly among minorities in the inner-city neighborhoods, exploded during the period 1965-73, (14). This quote shows the exceedingly addictive drug many minorities between the years 1965 to 1973 abused heroin. In the inner-city communities, those who used heroin to the highest degree likely tried it for the first time between the ages of 15 and 21. Heroin is a highly addictive drug, and about half the users who try it are prone within two years, (14). Johnson states that The heroin generation of youths who became addicted in 1965-73 is seeming(a) in the black community in virtu solelyy every city with a p opulation over 100,000 (14). This quote elicits that it was common for minority communities to have a serious drug abuse problem, and that minorities were responsible for the popularity of heroin in the inner cities.Heroin was not the only drug abused as the popularity of drug use continued to increase. In 1975, cocaine became very popular in within minority communities throughout the city, and remained very popular until 1984. The amount of cocaine users began to decline due to the rise of another drug, crack. It is evident that if inner-city minority drug abuse continues to be neglected, no matter what illegal drug it is, it will imbibe popularity and users will abuse the illegal substance. Minorities are not only the mass of users they are also the majority of distributors. In bleak York, African Americans and Puerto Ricans of the inner city communities often bought kilograms from the Italians, (18). 
Johnson writes, "At the lower levels of the heroin distribution system, heroin user-dealers would generally be advanced several bags of heroin to sell; they would use some and sell enough to pay their supplier in order to re-up" (18). This quote shows that the lower-level minority distributors would abuse the drugs advanced to them, selling some and using the rest. Drugs in the inner city are in constant demand.

Since drugs are in constant demand, a complex system is needed to establish consistency in the process of making the drugs, so they will always be available. The drug distribution system is broken down into five major roles: low-level distributors, sellers, dealers, traffickers, and growers (19). Historically, minorities in the inner-city communities have played large roles in all five of these categories. Every level is expected to provide a certain level of production; if the level of production is not met, then consequences occur. Heroin was not the only problem among the inner-city minorities; in the 1980s, crack emerged as another very popular drug on the streets. The Drug Enforcement Administration reported that four major minority groups controlled crack trafficking: Jamaicans controlled the East Coast and Midwestern states; Haitians controlled Florida and the area within two hundred miles of Washington, D.C.; Dominicans had control over New York and Massachusetts; and Black street gangs had control over most of the West Coast and western states (22).

Bruce D. Johnson states that "Newspaper reports and New York City police suggest that American blacks run several local crack-selling groups in Brooklyn, Queens, and other boroughs" (22). Johnson suggests that African Americans, who also have distributors in Detroit, Washington D.C., Chicago, and Los Angeles, are the primary distributors of all the minority groups.
The ethnic groups filling all of the distribution roles remain unclear, but based on evidence from many sources, minority groups control most of the distribution process. The abuse of drugs has had a huge impact on the crime rate in America. Bruce states, "In 1960, probably less than 5 percent of the total population, and probably less than a quarter of the criminal underclass, had ever used any type of illicit drug" (40). This quote shows that when drug use was not popular, the crime rate was lower. As the demand for drugs increases and different distribution groups form, competition for turf results in violence.

Drug dealers are in constant competition with each other to see who can make the most money, throw the best parties, and be with the most gorgeous women; drug dealers are relentless in proving themselves. Johnson writes, "Hard-drug sales have dramatically strengthened the subculture of violence. Old patterns of using violence and its threat to obtain money via crime, and to defend masculinity, have been further transformed" (27). This quote supports the idea that drug dealers will do anything to accomplish their goals. Drug dealers regularly use violence to prove a point. With the rise of a variety of drugs in the inner city, the crime rate also began to increase in America. Drug abusers contribute to the organization of illegal drug distributors who commit violent crimes in order to satisfy their greed; they also take part in activities that negatively affect themselves and their loved ones. Drugs can affect relationships and mental and physical health, and can sometimes lead to very serious crimes. In fact, peer pressure has a huge effect on decision making within a group of friends. In the article "Interactive and Higher-Order Effects of Social Influences on Drug Use," Alan W.
Stacy writes, "Social influences may show not only linear or interactive effects on drug use, but in some instances may show an intensified (concave upward) effect on behavior as social pressure to use drugs is increased" (229). This quote states that an individual's environment and the people around them can increase the likelihood of drug use, leading us to believe that minorities in the inner cities, living in highly populated communities, have a greater chance of being socially influenced toward drug use. A study of one hundred opiate abusers showed that forty-eight never married; twenty-five married; one was widowed; twelve divorced; and thirteen separated (645). This study shows that abusing a drug affects marital status among drug abusers. Almost half of the opiate abusers never married, and a quarter of them married but either separated or divorced. Marital status has a huge impact on African American children living in the inner city.

Johnson writes, "The chance that a black child will experience poverty is almost 90 percent if he or she lives in a family headed by a single woman under the age of thirty" (10). This quote states that marital status has a huge impact on the lives of African American children. Not only does drug abuse affect family situations in the inner cities, it also affects inner-city residents' health.

Drug abuse is most common among minorities in inner-city communities, and poor health is most common within these communities. Studies have been done to see whether drug use relates to any specific disease. Johnson writes, "the studies strongly suggest that heroin abusers constitute a substantial portion of all reported cases of the following conditions: hepatitis B, endocarditis, pneumonia, and trauma from assault"
(50). Johnson provides evidence that those who abuse the drug heroin have a greater chance of being diagnosed with hepatitis B, endocarditis, pneumonia, and trauma from assault. Not only can drug abuse lead to poor health and diseases that can be life-threatening, it can also lead to drug-related homicides. Johnson states that "In New York City, estimates of the proportion of homicides which were drug related have increased from about 24 percent in 1984 to about 56 percent in 1988" (51). Johnson reveals that in just four years, the increase in the use of drugs also increased the number of drug-related homicides.

The psychopharmacological variety, homicides that occurred when an individual was heavily intoxicated by alcohol or heroin, or was experiencing paranoia from a large dose of cocaine, was the most common type of homicide in New York City, accounting for twenty-five percent of homicides (51). The abuse of illegal drugs can lead to fatal events; these fatal events have affected minority families in inner cities as hard as, if not harder than, any other group of people. Johnson writes, "Between 1970 and 1985, the proportion of black children living in mother-only families increased from 30 to 51 percent." Johnson strongly shows that a little more than half of black children have grown up without a father.

Since illegal drug use became popular in the early 1900s, minority inner-city drug abuse has continued to grow. Many things have an impact on who distributes and uses the drugs, along with where the drugs are popular; drugs are very abundant in inner cities, and because of social and economic issues, minorities tend to be the distributors and users of the drugs. The majority of crime and violence in inner cities can be associated with drugs. Drug abuse, along with the crime and violence that come with it, has sabotaged many minority inner-city relationships with friends and families.
Minorities who abuse drugs in the inner cities have created a very dangerous lifestyle for themselves and those around them.

Works Cited

Johnson, Bruce D., Terry Williams, Kojo A. Dei, and Harry Sanabria. "Drug Abuse in the Inner City: Impact on Hard-Drug Users and the Community." Crime and Justice 13 (1990): 9-67. JSTOR. Web. 3 November 2014.

Clayton, Richard R. "The Family and Federal Drug Abuse Policies. Programs: Toward Making the Invisible Family Visible." Family Policy (Aug. 1979): 637-647. JSTOR. Web. 3 November 2014.

Stacy, Alan W. "Interactive and Higher-Order Effects of Social Influences on Drug Use." Journal of Health and Social Behavior 33.3 (Sep. 1992): 226-241. American Sociological Association. Web. 31 October 2014.

Sunday, May 19, 2019

Marriage and Fundamental Constitutional Right

Anti-nepotism rules in the United States date back to the turn of the century; however, since the early 1970s, there have been numerous legal challenges to such policies and regulations. Often, the plaintiffs are professionals who have been denied employment, transferred, or even dismissed because their spouses already worked for the same organization or because their spouses were promoted to supervisory positions over them. These plaintiffs contend that they have a legal right to work with their spouses, that anti-nepotism rules discriminate against them, and that such rules violate their constitutional right to marry.

What are the legal liabilities of governmental agencies and officials in this emerging area of public personnel law? An analysis of recent federal and state court decisions revealed that most judges do not interpret anti-nepotism rules to be either discriminatory or a direct violation of a fundamental constitutional right. The kind of rule at issue does not appear to be a factor in judicial opinions. For example, federal constitutional right-to-marry cases cover a variety of situations, including rules against one spouse supervising the other and policies against married couples working in the same governmental department.

Federal judges have subjected all anti-nepotism rules to only minimal scrutiny, deferring to management in virtually every instance. (1)

Management Rationales for Anti-Nepotism Rules

Both anti-nepotism rules and merit system regulations seek to protect the competency of the workforce; yet, paradoxically, competent job applicants are often turned away, and valuable employees are frequently transferred or even fired because of anti-nepotism policies. Poor performance is rarely the issue in such cases. (2) Rather, most organizations restrict married co-workers to some degree because of an assumption that the family is a potentially disruptive influence in the workplace.
(3) According to Kanter, the main purpose of anti-nepotism rules is to minimize the influence of traditional familial authority structures, such as that of husband over wife, on the development and management of rational bureaucracies. (4) While such attitudes are changing among some managers, the belief persists that married individuals will bring their quarrels to work, form coalitions to advance their own interests, and in other ways undermine organizational productivity and morale. (5) One survey of university department chairs revealed a deep ambivalence about hiring faculty couples. (6) Despite criticisms by some commentators that anti-nepotism rules are anachronistic, especially for professional couples, most organizations continue to prohibit close working relationships between family members. (7)

Saturday, May 18, 2019

Psychological Causes of Depression

Psychological Causes of Depression

The actual causes of depression are still unknown today, but there are a few theories that could help explain them. It is widely believed by psychologists and scientists that all mental illnesses are brought about by a complex correlation of psychological, biological, and social factors. A serious loss, chronic illness, relationship problem, work stress, family crisis, financial setback, or any unwelcome life change can ignite a depressive disorder (Psychologyinfo.com). Depression is a serious disorder in the United States and has only become more common among individuals as their lives become more stressful and overwhelming.

One theory for the cause of depression is the bio-psycho-social model of causation, which is the theory most commonly recognized by professionals for the cause of disorders such as depression. As stated earlier, it consists of a complicated correlation of psychological, biological, and social factors. Depression can be caused by fluctuating levels of hormones, which would explain why many people first experience it during puberty (Grohol, 2006).

The exact causes of depression are vast and unknown. Some types of depression have been found in families from generation to generation, which may suggest that depression can be inherited (Grohol, 2006). With that said, major depression seems to be present generation after generation in some families, but not with a frequency that suggests clear biological causes. Furthermore, it also occurs in people who have no family history of depression (Psychologyinfo.com). This is also found in people with bipolar disorder. A study of family members belonging to particular families in which each generation develops bipolar disorder found that those with the illness have a somewhat different genetic makeup than those who do not have the disorder.
Nonetheless, not everybody with the genetic makeup that predisposes them to bipolar disorder will contract the disorder. Additional factors, such as stresses at home, work, or school, are also involved in the disorder's onset (depression-guide.com). An assortment of psychological factors appears to play a part in susceptibility to these persistent types of depression. People who have low self-esteem, are not optimistic, and are easily overwhelmed by stress are prone to depression (Grohol, 2006). More than likely, these psychological factors are completely accountable for different forms of mild and moderate depression, particularly reactive depression. Reactive depression is generally diagnosed as an adjustment disorder during treatment.

Social learning factors also demonstrate why psychological complications appear to occur more regularly in family members throughout the generations. For example, if a child is raised in a pessimistic household, in which discouragement is frequent and encouragement is not, the child will develop a vulnerability to depression as well (Psychologyinfo.com).

Recently, researchers have found that physical changes in the body can be paired with psychological changes as well. Medical ailments such as cancer, Parkinson's disease, stroke, heart attack, and hormonal disorders can bring about a depressive illness. This can lead the ill person to feel unconcerned with their health and to be unwilling to further care for their physical needs. In addition, any stressful change, financial problem, relationship problem, or serious loss can activate a depressive episode (Grohol, 2006). Depression is a serious, growing problem in the United States. Modern science and research are slowly helping to further decipher the disorder and to slow its growth. Over 9.2 million Americans have major or clinical depression.
From an economic standpoint, depression is a major problem that needs to be accounted for, with an estimated $30.4 billion spent annually on depression-related treatments, medication, and diagnostics. Surprisingly, the World Health Organization estimates that by the year 2020, depression will be the number two cause of lost years of healthy living worldwide. As our population keeps rising and our economy worsens, we can only hope that psychologists help diminish the illness of depression.

References
"Cause of Depression: Different Causes of Depression." Depression Treatment, Medication, Help, Symptoms. Web. 14 Nov. 2009.
"Causes of Depression." Psychology Information Online. Web. 14 Nov. 2009.
Grohol, John M. "The Causes of Depression." Psych Central: Trusted Information in Mental Health and Psychology. 6 Dec. 2006. Web. 14 Nov. 2009.

Friday, May 17, 2019

Bayesian Inference

Biostatistics (2010), 11, 3, pp. 397-412. doi:10.1093/biostatistics/kxp053. Advance Access publication on December 4, 2009.

Bayesian inference for generalized additive mixed models

YOUYI FONG, Department of Biostatistics, University of Washington, Seattle, WA 98112, USA
HÅVARD RUE, Department of Mathematical Sciences, Norwegian University of Science and Technology, N-7491 Trondheim, Norway
JON WAKEFIELD, Departments of Statistics and Biostatistics, University of Washington, Seattle, WA 98112, USA

SUMMARY. Generalized linear mixed models (GLMMs) continue to grow in popularity due to their ability to directly acknowledge multiple levels of dependency and model different data types. For small sample sizes especially, likelihood-based inference can be unreliable, with variance components being particularly difficult to estimate. A Bayesian approach is appealing but has been hampered by the lack of a fast implementation and by the difficulty in specifying prior distributions, with variance components again being particularly problematic. Here, we briefly review previous approaches to computation in Bayesian implementations of GLMMs and illustrate in detail the use of integrated nested Laplace approximations in this context. We consider a number of examples, carefully specifying prior distributions on meaningful quantities in each case.
The examples cover a wide range of data types, including those requiring smoothing over time and a relatively complicated spline model for which we examine our prior specification in terms of the implied degrees of freedom. We conclude that Bayesian inference is now practically feasible for GLMMs and provides an attractive alternative to likelihood-based approaches such as penalized quasi-likelihood. As with likelihood-based approaches, great care is required in the analysis of clustered binary data since approximation strategies may be less accurate for such data.

Keywords: Integrated nested Laplace approximations; Longitudinal data; Penalized quasi-likelihood; Prior specification; Spline models.

1. INTRODUCTION

Generalized linear mixed models (GLMMs) combine a generalized linear model with normal random effects on the linear predictor scale, to give a rich family of models that have been used in a wide variety of applications (see, e.g. Diggle and others, 2002; Verbeke and Molenberghs, 2000, 2005; McCulloch and others, 2008). This flexibility comes at a price, however, in terms of analytical tractability, which has a number of implications, including computational complexity and an unknown degree to which inference is dependent on modeling assumptions. Likelihood-based inference may be carried out relatively easily within many software platforms (except perhaps for binary responses), but inference is dependent on asymptotic sampling distributions of estimators, with few guidelines available as to when such theory will produce accurate inference.

(To whom correspondence should be addressed. © The Author 2009. Published by Oxford University Press. All rights reserved.)
A Bayesian approach is attractive, but requires the specification of prior distributions, which is not straightforward, in particular for variance components. Computation is also an issue since the usual implementation is via Markov chain Monte Carlo (MCMC), which carries a large computational overhead. The seminal article of Breslow and Clayton (1993) helped to popularize GLMMs and placed an emphasis on likelihood-based inference via penalized quasi-likelihood (PQL). It is the aim of this article to describe, through a series of examples (including all of those considered in Breslow and Clayton, 1993), how Bayesian inference may be performed with computation via a fast implementation and with guidance on prior specification.

The structure of this article is as follows. In Section 2, we define notation for the GLMM, and in Section 3, we describe the integrated nested Laplace approximation (INLA) that has recently been proposed as a computationally convenient alternative to MCMC. Section 4 gives a number of prescriptions for prior specification. Three examples are considered in Section 5 (with additional examples being reported in the supplementary material available at Biostatistics online, along with a simulation study that reports the performance of INLA in the binary response situation). We conclude the paper with a discussion in Section 6.

2. THE GENERALIZED LINEAR MIXED MODEL

GLMMs extend the generalized linear model, as proposed by Nelder and Wedderburn (1972) and comprehensively described in McCullagh and Nelder (1989), by adding normally distributed random effects on the linear predictor scale. Assume $Y_{ij} \mid \theta_{ij}, \alpha_1 \sim p(\cdot)$, where $p(\cdot)$ is a member of the exponential family, that is,

$p(y_{ij} \mid \theta_{ij}, \alpha_1) = \exp\left[ \frac{y_{ij}\theta_{ij} - b(\theta_{ij})}{a(\alpha_1)} + c(y_{ij}, \alpha_1) \right]$,

for $i = 1, \ldots, m$ units (clusters) and $j = 1, \ldots, n_i$ measurements per unit, and where $\theta_{ij}$ is the (scalar) canonical parameter. Let $\mu_{ij} = E[Y_{ij} \mid \beta, b_i, \alpha_1] = b'(\theta_{ij})$ with

$g(\mu_{ij}) = \eta_{ij} = x_{ij}\beta + z_{ij}b_i$,

where $g(\cdot)$ is a monotonic link function, $x_{ij}$ is $1 \times p$, and $z_{ij}$ is $1 \times q$, with $\beta$ a $p \times 1$ vector of fixed effects and $b_i$ a $q \times 1$ vector of random effects; hence $\theta_{ij} = \theta_{ij}(\beta, b_i)$. Assume $b_i \mid Q \sim N(0, Q^{-1})$, where the precision matrix $Q = Q(\theta_2)$ depends on parameters $\theta_2$. For some choices of model, the matrix $Q$ is singular; examples include random walk models (as considered in Section 5.2) and intrinsic conditional autoregressive models. We further assume that $\beta$ is assigned a normal prior distribution. Let $\gamma = (\beta, b)$ denote the $G \times 1$ vector of parameters assigned Gaussian priors. We also require priors for $\alpha_1$ (if not a constant) and for $\theta_2$. Let $\theta = (\alpha_1, \theta_2)$ denote the variance components for which non-Gaussian priors are assigned, with $V = \dim(\theta)$.

3. INTEGRATED NESTED LAPLACE APPROXIMATION

Before the MCMC revolution, there were few examples of the application of Bayesian GLMMs since, outside of the linear mixed model, the models are analytically intractable. Kass and Steffey (1989) describe the use of Laplace approximations in Bayesian hierarchical models, while Skene and Wakefield (1990) used numerical integration in the context of a binary GLMM. The use of MCMC for GLMMs is particularly appealing since the conditional independencies of the model may be exploited when the required conditional distributions are calculated. Zeger and Karim (1991) described approximate Gibbs sampling for GLMMs, with nonstandard conditional distributions being approximated by normal distributions. More general Metropolis-Hastings algorithms are straightforward to construct (see, e.g. Clayton, 1996; Gamerman, 1997).
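The GLMM defined in Section 2 can be simulated directly, which is a useful check on the notation. The following is a minimal sketch with a Poisson response, log link, and a single random intercept per cluster; the cluster sizes, coefficient values, and seed are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

m, n_i = 59, 4                  # clusters and measurements per cluster (illustrative)
beta = np.array([1.0, -0.5])    # fixed effects: intercept and one covariate
sigma_b = 0.5                   # random-intercept standard deviation

# b_i ~ N(0, sigma_b^2): one random intercept per cluster
b = rng.normal(0.0, sigma_b, size=m)

# covariate and linear predictor eta_ij = x_ij beta + b_i
x = rng.normal(size=(m, n_i))
eta = beta[0] + beta[1] * x + b[:, None]

# Poisson response with the canonical log link: mu_ij = exp(eta_ij)
mu = np.exp(eta)
y = rng.poisson(mu)

print(y.shape)
```

Fitting such data is where the likelihood versus Bayesian trade-offs discussed above arise; the simulation itself only requires the linear predictor and the link.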
The winBUGS (Spiegelhalter, Thomas, and Best, 1998) software example manuals contain many GLMM examples. There are now a variety of additional software platforms for fitting GLMMs via MCMC, including JAGS (Plummer, 2009) and BayesX (Fahrmeir and others, 2004). A large practical impediment to data analysis using MCMC is the large computational burden. For this reason, we now briefly review the INLA computational approach upon which we concentrate. The method combines Laplace approximations and numerical integration in a very efficient manner (see Rue and others, 2009, for a more extensive treatment). For the GLMM described in Section 2, the posterior is given by

$\pi(\gamma, \theta \mid y) \propto \pi(\gamma \mid \theta)\,\pi(\theta) \prod_{i=1}^{m} p(y_i \mid \gamma, \alpha_1)$
$\propto \pi(\beta)\,\pi(\theta)\, |Q(\theta_2)|^{1/2} \exp\left( -\tfrac{1}{2} b^T Q(\theta_2) b + \sum_{i=1}^{m} \log p(y_i \mid \gamma, \alpha_1) \right)$,

where $y_i = (y_{i1}, \ldots, y_{in_i})$ is the vector of observations on unit/cluster $i$. We wish to obtain the posterior marginals $\pi(\gamma_g \mid y)$, $g = 1, \ldots, G$, and $\pi(\theta_v \mid y)$, $v = 1, \ldots, V$. The number of variance components, $V$, should not be too large for accurate inference (since these components are integrated out via Cartesian product numerical integration, which does not scale well with dimension). We write

$\pi(\gamma_g \mid y) = \int \pi(\gamma_g \mid \theta, y)\, \pi(\theta \mid y)\, d\theta$,

which may be evaluated via the approximation

$\tilde{\pi}(\gamma_g \mid y) = \sum_{k=1}^{K} \tilde{\pi}(\gamma_g \mid \theta_k, y)\, \tilde{\pi}(\theta_k \mid y)\, \Delta_k$,  (3.1)

where Laplace (or other related analytical) approximations are applied to carry out the integrations required for evaluation of $\tilde{\pi}(\gamma_g \mid \theta, y)$. To produce the grid of points $\theta_k$, $k = 1, \ldots, K$, over which numerical integration is performed, the mode of $\tilde{\pi}(\theta \mid y)$ is located and the Hessian is approximated, from which the grid is created and exploited in (3.1).
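The sum in (3.1) is simply a weighted mixture of conditional posteriors over a grid of hyperparameter values. The toy sketch below mimics that idea in one dimension for a normal model with unknown mean $\gamma$ and unknown observation precision $\theta$; the data, priors, and grid are invented for illustration, and it evaluates the joint posterior exactly on a grid rather than using Laplace approximations as real INLA does.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=50)          # toy data: true mean 2, true precision 1

gamma_grid = np.linspace(0.0, 4.0, 201)    # latent mean gamma
theta_grid = np.linspace(0.2, 3.0, 101)    # observation precision theta
G, T = np.meshgrid(gamma_grid, theta_grid, indexing="ij")

# unnormalized log posterior on the grid (flat prior on gamma, Ga(1,1) on theta)
loglik = (-0.5 * T * ((y[:, None, None] - G) ** 2).sum(axis=0)
          + 0.5 * y.size * np.log(T))
logpost = loglik + stats.gamma.logpdf(T, a=1.0, scale=1.0)

# marginalize over the theta grid, as in the weighted sum (3.1)
post = np.exp(logpost - logpost.max())
marg_gamma = post.sum(axis=1)
dg = gamma_grid[1] - gamma_grid[0]
marg_gamma /= marg_gamma.sum() * dg

post_mean = (gamma_grid * marg_gamma).sum() * dg
print(round(post_mean, 2))
```

The resulting posterior mean for $\gamma$ sits near the sample mean, as it should; the point of the sketch is only the grid-sum structure, not the accuracy of any particular approximation.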
The output of INLA consists of posterior marginal distributions, which can be summarized via means, variances, and quantiles. Importantly for model comparison, the normalizing constant $p(y)$ is calculated. The evaluation of this quantity is not straightforward using MCMC (DiCiccio and others, 1997; Meng and Wong, 1996). The deviance information criterion (Spiegelhalter, Best, and others, 1998) is popular as a model selection tool, but in random-effects models, the implicit approximation in its use is valid only when the effective number of parameters is much smaller than the number of independent observations (see Plummer, 2008).

4. PRIOR DISTRIBUTIONS

4.1 Fixed effects

Recall that we assume $\beta$ is normally distributed. Often there will be sufficient information in the data for $\beta$ to be well estimated with a normal prior with a large variance (of course, there will be circumstances under which we would like to specify more informative priors, e.g. when there are many correlated covariates). The use of an improper prior for $\beta$ will often lead to a proper posterior, though care should be taken. For example, Wakefield (2007) shows that a Poisson likelihood with a linear link can lead to an improper posterior if an improper prior is used. Hobert and Casella (1996) discuss the use of improper priors in linear mixed effects models. If we wish to use informative priors, we may specify independent normal priors, with the parameters for each component obtained via the specification of 2 quantiles with associated probabilities. For logistic and log-linear models, these quantiles may be given on the exponentiated scale since these are more interpretable (as the odds ratio and rate ratio, respectively). If $\theta_1$ and $\theta_2$ are the quantiles on the exponentiated scale and $p_1$ and $p_2$ are the associated probabilities, then the parameters of the normal prior are given by

$\mu = \frac{z_2 \log(\theta_1) - z_1 \log(\theta_2)}{z_2 - z_1}, \qquad \sigma = \frac{\log(\theta_2) - \log(\theta_1)}{z_2 - z_1}$,

where $z_1$ and $z_2$ are the $p_1$ and $p_2$ quantiles of a standard normal random variable. For example, in an epidemiological context, we may wish to specify a prior on a relative risk parameter, $\exp(\beta_1)$, which has a median of 1 and a 95% point of 3 (if we think it is unlikely that the relative risk associated with a unit increase in exposure exceeds 3). These specifications lead to $\beta_1 \sim N(0, 0.668^2)$.

4.2 Variance components

We begin by describing an approach for choosing a prior for a single random effect, based on Wakefield (2009). The basic idea is to specify a range for the more interpretable marginal distribution of $b_i$ and use this to drive the specification of prior parameters. We state a trivial lemma upon which prior specification is based, but first define some notation. We write $\tau \sim \mathrm{Ga}(a_1, a_2)$ for the gamma distribution with unnormalized density $\tau^{a_1 - 1}\exp(-a_2 \tau)$. For $q$-dimensional $x$, we write $x \sim T_q(\mu, \Sigma, d)$ for the Student's $t$ distribution with unnormalized density $[1 + (x-\mu)^T \Sigma^{-1}(x-\mu)/d]^{-(d+q)/2}$. This distribution has location $\mu$, scale matrix $\Sigma$, and degrees of freedom $d$.

LEMMA 1. Let $b \mid \tau \sim N(0, \tau^{-1})$ and $\tau \sim \mathrm{Ga}(a_1, a_2)$. Integration over $\tau$ gives the marginal distribution of $b$ as $T_1(0, a_2/a_1, 2a_1)$.

To decide upon a prior, we give a range for a generic random effect $b$ and specify the degrees of freedom, $d$, and then solve for $a_1$ and $a_2$. For the range $(-R, R)$, we use the relationship $t^d_{1-(1-q)/2}\sqrt{a_2/a_1} = R$, where $t^d_q$ is the $100 \times q$th quantile of a Student's $t$ random variable with $d$ degrees of freedom, to give $a_1 = d/2$ and $a_2 = R^2 d / [2(t^d_{1-(1-q)/2})^2]$. In the linear mixed effects model, $b$ is directly interpretable, while for binomial or Poisson models, it is more appropriate to think in terms of the marginal distribution of $\exp(b)$, the residual odds and rate ratio, respectively, and this distribution is log Student's $t$. For example, if we choose $d = 1$ (to give a Cauchy marginal) and a 95% range of $[0.1, 10]$, we take $R = \log 10$ and obtain $a_1 = 0.5$ and $a_2 = 0.0164$. Another convenient choice is $d = 2$, to give the exponential distribution with mean $a_2^{-1}$ for $\sigma^{-2}$. This leads to closed-form expressions for the more interpretable quantiles of $\sigma$, so that, for example, if we specify the median for $\sigma$ as $\sigma_m$, we obtain $a_2 = \sigma_m^2 \log 2$. Unfortunately, the use of $\mathrm{Ga}(\epsilon, \epsilon)$ priors has become popular as a prior for $\sigma^{-2}$ in a GLMM context, arising from their use in the winBUGS examples manual. As has been pointed out many times (e.g. Kelsall and Wakefield, 1999; Gelman, 2006; Crainiceanu and others, 2008), this choice places the majority of the prior mass away from zero and leads to a marginal prior for the random effects which is Student's $t$ with $2\epsilon$ degrees of freedom (so that the tails are much heavier than even a Cauchy) and is difficult to justify in any practical setting.

We now state another trivial lemma, but first introduce notation for the Wishart distribution. For the $q \times q$ nonsingular matrix $z$, we write $z \sim \mathrm{Wishart}_q(r, S)$ for the Wishart distribution with unnormalized density $|z|^{(r-q-1)/2}\exp[-\tfrac{1}{2}\mathrm{tr}(z S^{-1})]$. This distribution has $E[z] = rS$ and $E[z^{-1}] = S^{-1}/(r - q - 1)$, and we require $r > q - 1$ for a proper distribution.

LEMMA 2. Let $b = (b_1, \ldots, b_q)$, with $b \mid Q \sim_{iid} N_q(0, Q^{-1})$ and $Q \sim \mathrm{Wishart}_q(r, S)$. Integration over $Q$ gives the marginal distribution of $b$ as $T_q(0, [(r - q + 1)S]^{-1}, r - q + 1)$.

The margins of a multivariate Student's $t$ are $t$ also, which allows $r$ and $S$ to be chosen as in the univariate case. Specifically, the $k$th element of a generic random effect, $b_k$, follows a univariate Student's $t$ distribution with location 0, scale $S^{kk}/(r - q + 1)$, and degrees of freedom $d = r - q + 1$, where $S^{kk}$ is element $(k, k)$ of the inverse of $S$. We obtain $r = d + q - 1$ and $S^{kk} = (t^d_{1-(1-q)/2})^2/(d R^2)$. If a priori we have no reason to believe that elements of $b$ are correlated, we may specify $S_{jk} = 0$ for $j \neq k$ and $S^{kk} = 1/S_{kk}$, to recover the univariate specification, recognizing that with $q = 1$, the univariate Wishart has parameters $a_1 = r/2$ and $a_2 = 1/(2S)$. If we believe that elements of $b$ are dependent, then we may specify the correlations and solve for the off-diagonal elements of $S$. To ensure propriety of the posterior, proper priors are required for $Q$; Zeger and Karim (1991) use an improper prior for $Q$, so that the posterior is improper also.

4.3 Effective degrees of freedom

In Section 5.3, we describe the GLMM representation of a spline model. A generic linear spline model is given by

$y_i = x_i \beta + \sum_{k=1}^{K} z_{ik} b_k + \epsilon_i$,

where $x_i$ is a $p \times 1$ vector of covariates with $p \times 1$ associated fixed effects $\beta$, $z_{ik}$ denote the spline basis, $b_k \sim_{iid} N(0, \sigma_b^2)$, and $\epsilon_i \sim_{iid} N(0, \sigma_\epsilon^2)$, with $b_k$ and $\epsilon_i$ independent. Specification of a prior for $\sigma_b^2$ is not straightforward, but may be of great importance since it contributes to determining the amount of smoothing that is applied. Ruppert and others (2003, p. 77) raise concerns about the instability of automatic smoothing parameter selection even for single predictor models, and continue, "Although we are attracted by the automatic nature of the mixed model-REML approach to fitting additive models, we discourage blind acceptance of whatever answer it provides and recommend looking at other amounts of smoothing." While we would echo this general advice, we believe that a Bayesian mixed model approach, with carefully chosen priors, can increase the stability of the mixed model representation. There has been some discussion of the choice of prior for $\sigma_b^2$ in a spline context (Crainiceanu and others, 2005, 2008). More general discussion can be found in Natarajan and Kass (2000) and Gelman (2006).

In practice (e.g. Hastie and Tibshirani, 1990), smoothers are often applied with a fixed degrees of freedom. We extend this rationale by examining the prior degrees of freedom that is implied by the choice $\sigma_b^{-2} \sim \mathrm{Ga}(a_1, a_2)$. For the general linear mixed model

$y = x\beta + zb + \epsilon$,

we have

$\hat{y} = x\hat{\beta} + z\hat{b} = C(C^T C + \Lambda)^{-1} C^T y$,

where $C = [x\ z]$ is $n \times (p + K)$ and

$\Lambda = \begin{bmatrix} 0_{p \times p} & 0_{p \times K} \\ 0_{K \times p} & \sigma_\epsilon^2 \,\mathrm{cov}(b)^{-1} \end{bmatrix}$

(see, e.g. Ruppert and others, 2003, Section 8.3). The total degrees of freedom associated with the model is

$\mathrm{df} = \mathrm{tr}[(C^T C + \Lambda)^{-1} C^T C]$,

which may be decomposed into the degrees of freedom associated with $\beta$ and $b$, and extends easily to situations in which we have additional random effects beyond those associated with the spline basis (such an example is considered in Section 5.3). In each of these situations, the degrees of freedom associated with the respective parameter is obtained by summing the appropriate diagonal elements of $(C^T C + \Lambda)^{-1} C^T C$. Specifically, if we have $j = 1, \ldots, d$ sets of random-effect parameters (there are $d = 2$ in the model considered in Section 5.3), then let $E_j$ be the $(p + K) \times (p + K)$ diagonal matrix with ones in the diagonal positions corresponding to set $j$. Then the degrees of freedom associated with this set is

$\mathrm{df}_j = \mathrm{tr}[E_j (C^T C + \Lambda)^{-1} C^T C]$.

Note that the effective degrees of freedom changes as a function of $K$, as expected. To evaluate $\Lambda$, $\sigma_\epsilon^2$ is required. If we specify a proper prior for $\sigma_\epsilon^2$, then we may specify the joint prior as $\pi(\sigma_b^2, \sigma_\epsilon^2) = \pi(\sigma_\epsilon^2)\,\pi(\sigma_b^2 \mid \sigma_\epsilon^2)$. Often, however, we assume the improper prior $\pi(\sigma_\epsilon^2) \propto 1/\sigma_\epsilon^2$, since the data provide sufficient information with respect to $\sigma_\epsilon^2$. Hence, we have found the substitution of an estimate for $\sigma_\epsilon^2$ (for example, from the fitting of a spline model in a likelihood implementation) to be a practically reasonable strategy.

As a simple nonspline demonstration of the derived effective degrees of freedom, consider the 1-way analysis of variance model

$Y_{ij} = \beta_0 + b_i + \epsilon_{ij}$,

with $b_i \sim_{iid} N(0, \sigma_b^2)$ and $\epsilon_{ij} \sim_{iid} N(0, \sigma_\epsilon^2)$, for $i = 1, \ldots, m = 10$ groups and $j = 1, \ldots, n = 5$ observations per group. For illustration, we assume $\sigma_b^{-2} \sim \mathrm{Ga}(0.5, 0.005)$. Figure 1 displays the prior distribution for $\sigma_b$, the implied prior distribution on the effective degrees of freedom, and the bivariate plot of these quantities. For clarity of plotting, we omit a small number of points beyond $\sigma_b > 2.5$ (4% of points). In panel (c), we have placed dashed horizontal lines at effective degrees of freedom equal to 1 (complete smoothing) and 10 (no smoothing). From panel (b), we conclude that here the prior choice favors quite strong smoothing. This may be contrasted with the gamma prior with parameters (0.001, 0.001), which, in this example, gives greater than 99% of the prior mass on an effective degrees of freedom greater than 9.9, again showing the inappropriateness of this prior. It is appealing to extend the above argument to nonlinear models, but unfortunately this is not straightforward.
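The trace formula for the effective degrees of freedom is easy to evaluate directly. The sketch below does so for the 1-way ANOVA example above ($m = 10$ groups, $n = 5$ observations per group), computing $\mathrm{tr}[(C^T C + \Lambda)^{-1} C^T C]$ for given variance values; the particular pairs $(\sigma_\epsilon^2, \sigma_b^2)$ are chosen only to illustrate the smoothing extremes, not taken from the paper.

```python
import numpy as np

m, n = 10, 5                      # groups and observations per group
N, p = m * n, 1                   # total observations; p = 1 fixed effect (intercept)

# design C = [x z]: intercept column plus group-membership indicators
x = np.ones((N, 1))
z = np.kron(np.eye(m), np.ones((n, 1)))
C = np.hstack([x, z])

def effective_df(sigma2_eps, sigma2_b):
    # Lambda: zeros for the fixed effect, sigma_eps^2 / sigma_b^2 for each b_i
    lam = np.diag([0.0] * p + [sigma2_eps / sigma2_b] * m)
    H = np.linalg.solve(C.T @ C + lam, C.T @ C)
    return np.trace(H)

df_smooth = effective_df(1.0, 0.01)    # tiny sigma_b^2: near-complete smoothing
df_rough = effective_df(1.0, 100.0)    # huge sigma_b^2: almost no smoothing
print(df_smooth, df_rough)
```

As expected, a small $\sigma_b^2$ pushes the degrees of freedom toward 1 (intercept only) and a large $\sigma_b^2$ pushes it toward the number of groups.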
For a nonlinear model, the degrees of freedom may be approximated by

$\mathrm{df} = \mathrm{tr}[(C^T W C + \Lambda)^{-1} C^T W C]$,

where $W = \mathrm{diag}\!\left[ V_i^{-1} \left( \frac{d\mu_i}{d\eta_i} \right)^2 \right]$ and $h = g^{-1}$ denotes the inverse link function. Unfortunately, this quantity depends on $\beta$ and $b$, which means that in practice, we would have to use prior estimates for all of the parameters, which may not be practically possible. Fitting the model using likelihood and then substituting in estimates for $\beta$ and $b$ seems philosophically dubious.

Figure 1: Gamma prior for $\sigma_b^{-2}$ with parameters 0.5 and 0.005: (a) implied prior for $\sigma_b$, (b) implied prior for the effective degrees of freedom, and (c) effective degrees of freedom versus $\sigma_b$.

4.4 Random walk models

Conditionally represented smoothing models are popular for random effects in both temporal and spatial applications (see, e.g. Besag and others, 1995; Rue and Held, 2005). For illustration, consider models of the form

$p(u \mid \sigma_u^2) = (2\pi)^{-(m-r)/2}\, |Q|^{1/2}\, \sigma_u^{-(m-r)} \exp\left( -\frac{1}{2\sigma_u^2}\, u^T Q u \right)$,  (4.1)

where $u = (u_1, \ldots, u_m)$ is the collection of random effects, $Q$ is a (scaled) precision matrix of rank $m - r$, whose form is determined by the application at hand, and $|Q|$ is a generalized determinant, the product of the $m - r$ nonzero eigenvalues of $Q$. Picking a prior for $\sigma_u$ is not straightforward because $\sigma_u$ has an interpretation as a conditional standard deviation, where the elements that are conditioned upon depend on the application. We may simulate realizations from (4.1) to examine candidate prior distributions. Due to the rank deficiency, (4.1) does not define a probability density, and so we cannot directly simulate from this prior. However, Rue and Held (2005) give an algorithm for generating samples from (4.1):

1. Simulate $z_j \sim N(0, \lambda_j^{-1})$, for $j = m - r + 1, \ldots, m$, where $\lambda_j$ are the eigenvalues of $Q$ (there are $m - r$ nonzero eigenvalues, as $Q$ has rank $m - r$).
2. Return $u = z_{m-r+1} e_{m-r+1} + \cdots + z_m e_m = E z$, where $e_j$ are the corresponding eigenvectors of $Q$, $E$ is the $m \times (m - r)$ matrix with these eigenvectors as columns, and $z$ is the $(m - r) \times 1$ vector containing $z_j$, $j = m - r + 1, \ldots, m$.

The algorithm is constrained so that samples are zero in the null-space of $Q$: if $u$ is a sample and the null-space is spanned by $v_1$ and $v_2$, then $u^T v_1 = u^T v_2 = 0$. For example, suppose $Q \mathbf{1} = 0$, so that the null-space is spanned by $\mathbf{1}$ and the rank deficiency is 1. Then $Q$ is improper since the eigenvalue corresponding to $\mathbf{1}$ is zero, and samples $u$ produced by the algorithm are such that $u^T \mathbf{1} = 0$. In Section 5.2, we use this algorithm to evaluate different priors via simulation. It is also useful to note that if we wish to compute the marginal variances only, simulation is not required, as they are available as the diagonal elements of the matrix $\sum_j \lambda_j^{-1} e_j e_j^T$.

5. EXAMPLES

Here, we report 3 examples, with 4 others described in the supplementary material available at Biostatistics online. Together these cover all the examples in Breslow and Clayton (1993), along with an additional spline example. In the first example, results using the INLA numerical/analytical approximation described in Section 3 were compared with MCMC as implemented in the JAGS software (Plummer, 2009) and found to be accurate. For the models considered in the second and third examples, the approximation was compared with the MCMC implementation contained in the INLA software.

5.1 Longitudinal data

We consider the much-analyzed epilepsy data set of Thall and Vail (1990). These data concern the number of seizures, $Y_{ij}$, for patient $i$ on visit $j$, with $Y_{ij} \mid \beta, b_i \sim_{ind} \mathrm{Poisson}(\mu_{ij})$, $i = 1, \ldots, 59$, $j = 1, \ldots, 4$. We concentrate on the 3 random-effects models fitted by Breslow and Clayton (1993):

$\log \mu_{ij} = x_{ij}\beta + b_{1i}$,  (5.1)
$\log \mu_{ij} = x_{ij}\beta + b_{1i} + b_{0ij}$,  (5.2)
$\log \mu_{ij} = x_{ij}\beta + b_{1i} + b_{2i} V_j/10$,  (5.3)

where $x_{ij}$ is a $1 \times 6$ vector containing a 1 (representing the intercept), an indicator for baseline measurement, a treatment indicator, the baseline-by-treatment interaction (which is the parameter of interest), age, and either an indicator of the 4th visit (models (5.1) and (5.2), denoted $V_4$) or visit number coded $-3, -1, +1, +3$ (model (5.3), denoted $V_j/10$); $\beta$ is the associated fixed effect. All 3 models include patient-specific random effects $b_{1i} \sim N(0, \sigma_1^2)$, while in model (5.2), we introduce independent measurement errors, $b_{0ij} \sim N(0, \sigma_0^2)$. Model (5.3) includes random effects on the slope associated with visit, $b_{2i}$, with

$(b_{1i}, b_{2i})^T \sim N(0, Q^{-1})$.  (5.4)

We assume $Q \sim \mathrm{Wishart}(r, S)$ with $S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}$.

Table 1. PQL and INLA summaries (estimate, standard deviation) for the epilepsy data.

Variable     | (5.1) PQL    | (5.1) INLA   | (5.2) PQL    | (5.2) INLA   | (5.3) PQL    | (5.3) INLA
Base         | 0.87 (0.14)  | 0.88 (0.15)  | 0.86 (0.13)  | 0.8 (0.15)   | 0.87 (0.14)  | 0.88 (0.14)
Trt          | -0.91 (0.41) | -0.94 (0.44) | -0.93 (0.40) | -0.96 (0.44) | -0.91 (0.41) | -0.94 (0.44)
Base x Trt   | 0.33 (0.21)  | 0.34 (0.22)  | 0.34 (0.21)  | 0.35 (0.23)  | 0.33 (0.21)  | 0.34 (0.22)
Age          | 0.47 (0.36)  | 0.47 (0.38)  | 0.47 (0.35)  | 0.48 (0.39)  | 0.46 (0.36)  | 0.47 (0.38)
V4 or V/10   | -0.16 (0.05) | -0.16 (0.05) | -0.10 (0.09) | -0.10 (0.09) | -0.26 (0.16) | -0.27 (0.16)
sigma_0      |              |              | 0.36 (0.04)  | 0.41 (0.04)  |              |
sigma_1      | 0.53 (0.06)  | 0.56 (0.08)  | 0.48 (0.06)  | 0.53 (0.07)  | 0.52 (0.06)  | 0.56 (0.06)
sigma_2      |              |              |              |              | 0.74 (0.16)  | 0.70 (0.14)

For prior specification, we begin with the bivariate model and assume that $S$ is diagonal.
Let Y jk be the number of breast cancer of cases in age group j (2024,. . . , 8084) and birth cohort k (18401849,. . . ,19401949) with j = 1, . . . , J = 13 and k = 1, . . . , K = 11. Following Breslow and Clayton (1993), we assume Y jk ? jk ? ind Poisson(? jk ) with log ? jk = log n jk + ? j + ? k + vk + u k (5. 5) and where n jk is the person-years denominator, exp(? j ), j = 1, . . . , J , represent fixed effects for age relative risks, exp(? is the relative risk associated with a one group increase in cohort group, vk ? iid 406 Y. F ONG AND OTHERS 2 N (0, ? v ) represent unstructured random effects associated with cohort k, with smooth cohort terms u k by-line a second-order random-effects model with Eu k u i i k = 2u k? 1 ? u k? 2 and Var(u k u i 2 i k) = ? u . This latter model is to allow the rates to change smoothly with cohort. An equivalent representation of this model is, for 2 k K ? 1, 1 Eu k u l l = k = (4u k? 1 + 4u k+1 ? u k? 2 ? u k+2 ), 6 Var(u k u l l = k) = 2 ? . 6 Downloaded from http//biostatistics. oxfordjournals. org/ at Cornell University Library on April 20, 2013 The rank of Q in the (4. 1) representation of this model is K ? 2 reflecting that both the overall level and the overall trend are aliased (hence the appearance of ? in (5. 5)). The term exp(vk ) reflects the unstructured residual relative risk and, following the argument in Section 4. 2, we specify that this quantity should lie in 0. 5, 2. 0 with probability 0. 95, with a marginal log Cauchy ? 2 distribution, to obtain the gamma prior ? v ? Ga(0. 5 , 0. 00149).The term exp(u k ) reflects the smooth component of the residual relative risk, and the specification of a 2 prior for the associated variance component ? u is more difficult, given its conditional interpretation. Using the algorithm described in Section 4. 2, we examined simulations of u for different choices of gamma ? 2 hyperparameters and obdurate on the choice ? u ? Ga(0. 5, 0. 
001) Figure 2 shows 10 realizations from the prior. The rationale here is to examine realizations to see if they conform to our prior expectations and in particular exhibit the required amount of smoothing.All but one of the realizations vary smoothly across the 11 cohorts, as is desirable. Due to the tail of the gamma distribution, we will always have some extreme realizations. The INLA results, summarized in graphical form, are presented in Figure 2(b), alongside likelihood fits in which the birth cohort effect is incorporated as a linear term and as a factor. We see that the smoothing mod el provides a smooth fit in birth cohort, as we would hope. 5. 3 B-Spline nonparametric regression We demonstrate the use of INLA for nonparametric smoothing using OSullivan splines, which are based on a B-spline basis.We illustrate using data from Bachrach and others (1999) that concerns longitudinal measurements of spinal bone mineral density (SBMD) on 230 egg-producing(prenominal) subjects aged between 8 and 27, and of 1 of 4 ethnic groups Asian, Black, Hispanic, and White. Let yi j denote the SBMD measure for subject i at occasion j, for i = 1, . . . , 230 and j = 1, . . . , n i with n i being between 1 and 4. Figure 3 shows these data, with the gray lines indicating measurements on the same woman. We assume the model K Yi j = x i ? 1 + agei j ? 2 + k=1 z i jk b1k + b2i + ij, where x i is a 1 ? vector containing an indicator for the ethnicity of one-on-one i, with ? 1 the associated 4 ? 1 vector of fixed effects, z i jk is the kth basis associated with age, with associated par ameter b1k ? 2 2 N (0, ? 1 ), and b2i ? N (0, ? 2 ) are woman-specific random effects, finally, i j ? iid N (0, ? 2 ). All random terms are assumed independent. Note that the spline model is assumed common to all ethnic groups and all women, though it would be straightforward to allow a different spline for each ethnicity. Writing this model in the form y = x ? + z 1b1 + z 2b 2 + = C ? + . 
Bayesian GLMMs

Fig. 2. (a) Ten realizations (on the relative risk scale) from the random-effects second-order random walk model in which the prior on the random-effects precision is Ga(0.5, 0.001); (b) summaries of fitted models: the solid line corresponds to a log-linear model in birth cohort, the circles to birth cohort as a factor, and + to the Bayesian smoothing model.

We use the method described in Section 4.3 to examine the effective number of parameters implied by the priors $\sigma_1^{-2} \sim \mathrm{Ga}(a_1, a_2)$ and $\sigma_2^{-2} \sim \mathrm{Ga}(a_3, a_4)$. To fit the model, we first use the R code provided in Wand and Ormerod (2008) to construct the basis functions, which are then input to the INLA program. Running the REML version of the model, we obtain $\hat{\sigma}_\epsilon = 0.033$, which we use to evaluate the effective degrees of freedom associated with the priors for $\sigma_1^2$ and $\sigma_2^2$. We assume the usual improper prior, $\pi(\sigma_\epsilon^2) \propto 1/\sigma_\epsilon^2$, for $\sigma_\epsilon^2$. After some experimentation, we settled on the prior $\sigma_1^{-2} \sim \mathrm{Ga}(0.5, 5 \times 10^{-6})$. For $\sigma_2^{-2}$, we wished to have a 90% interval for $b_{2i}$ of $\pm 0.3$ which, with 1 degree of freedom for the marginal distribution, leads to $\sigma_2^{-2} \sim \mathrm{Ga}(0.5, 0.00113)$. Figure 4 shows the priors for $\sigma_1$ and $\sigma_2$, along with the implied effective degrees of freedom under the assumed priors. For the spline component, the 90% prior interval for the effective degrees of freedom is [2.4, 10]. Table 2 compares estimates from the REML and INLA implementations of the model, and we see close correspondence between the two. Figure 4 also shows the posterior medians for $\sigma_1$ and $\sigma_2$ and for the 2 effective degrees of freedom; for the spline and random effects, these correspond to 8 and 214, respectively. The latter figure shows that there is considerable variability between the 230 women here. This is confirmed in Figure 3, where we observe large vertical differences between the profiles. This figure also shows the fitted spline, which appears to mimic the trend in the data well.

Fig. 3. SBMD versus age by ethnicity. Measurements on the same woman are joined with gray lines. The solid curve corresponds to the fitted spline and the dashed lines to the individual fits.

5.4 Timings

For the 3 models in the longitudinal data example, INLA takes 1 to 2 s to run, using a single CPU. To obtain estimates with similar precision from MCMC, we ran JAGS for 100,000 iterations, which took 4 to 6 min. For the model in the temporal smoothing example, INLA takes 45 s to run, using 1 CPU. Part of the INLA procedure can be executed in parallel; if 2 CPUs are available, as is the case with today's prevalent Intel Core 2 Duo processors, INLA takes only 27 s to run. It is not currently possible to implement this model in JAGS. We ran the MCMC utility built into the INLA software for 3.6 million iterations to obtain estimates of comparable accuracy, which took 15 h. For the model in the B-spline nonparametric regression example, INLA took 5 s to run, using a single CPU. We ran the MCMC utility built into the INLA software for 2.5 million iterations to obtain estimates of comparable accuracy, the analysis taking 40 h.

Fig. 4. Prior summaries: (a) $\sigma_1$, the standard deviation of the spline coefficients; (b) effective degrees of freedom associated with the prior for the spline coefficients; (c) effective degrees of freedom versus $\sigma_1$; (d) $\sigma_2$, the standard deviation of the between-individual random effects; (e) effective degrees of freedom associated with the individual random effects; and (f) effective degrees of freedom versus $\sigma_2$.
The vertical dashed lines on panels (a), (b), (d), and (e) correspond to the posterior medians.

Table 2. REML and INLA summaries for the spinal bone data. The intercept corresponds to the Asian group. Entries are estimates with standard errors in parentheses; for the entries marked with a –, standard errors were unavailable.

Variable     REML            INLA
Intercept    0.560 (0.029)   0.563 (0.031)
Black        0.106 (0.021)   0.106 (0.021)
Hispanic     0.013 (0.022)   0.013 (0.022)
White        0.026 (0.022)   0.026 (0.022)
Age          0.021 (0.002)   0.021 (0.002)
σ1           0.018 (–)       0.024 (0.006)
σ2           0.109 (–)       0.109 (0.006)
σε           0.033 (–)       0.033 (0.002)

6. DISCUSSION

In this paper, we have demonstrated the use of the INLA computational method for GLMMs. We have found that the approximation strategy employed by INLA is accurate in general, but less accurate for binomial data with small denominators. The supplementary material available at Biostatistics online contains an extensive simulation study, replicating that presented in Breslow and Clayton (1993). There are some suggestions in the discussion of Rue and others (2009) on how to construct an improved Gaussian approximation that does not use the mode and the curvature at the mode. It is likely that these suggestions will improve the results for binomial data with small denominators. There is an urgent need for diagnostic tools to flag when INLA is inaccurate. Conceptually, computation for nonlinear mixed effects models (Davidian and Giltinan, 1995; Pinheiro and Bates, 2000) can also be handled by INLA, but this capability is not currently available. The website www.r-inla.org contains all the data and R scripts needed to perform the analyses and simulations reported in this paper; the latest release of the software implementing INLA can also be found at this site. Recently, Breslow (2005) revisited PQL and concluded that "PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations."
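As an aside, the effective-degrees-of-freedom computation underlying Figure 4 can be illustrated in a simplified ridge-regression form. The sketch below is a toy analogue of the Section 4.3 method, not the paper's implementation: the basis matrix is a randomly generated stand-in, and all names and variance values are illustrative assumptions.

```python
import numpy as np

def effective_df(Z, sigma_eps2, sigma_b2):
    # For y = Z b + eps, with b ~ N(0, sigma_b2 * I) and eps ~ N(0, sigma_eps2 * I),
    # the smoother matrix is S = Z (Z'Z + lam I)^{-1} Z', lam = sigma_eps2 / sigma_b2,
    # and trace(S) is the effective number of parameters used by the fit.
    lam = sigma_eps2 / sigma_b2
    K = Z.shape[1]
    return float(np.trace(Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T)))

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 20))  # stand-in basis: 200 observations, 20 columns

# A diffuse prior on b spends nearly all 20 degrees of freedom; a very tight
# prior shrinks the fit toward zero effective parameters.
print(effective_df(Z, sigma_eps2=0.033**2, sigma_b2=1e6))   # close to 20
print(effective_df(Z, sigma_eps2=0.033**2, sigma_b2=1e-8))  # close to 0
```

Mapping a prior on the variance component to a prior on this trace is what produces the effective-degrees-of-freedom summaries, such as the [2.4, 10] spline interval, reported above.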
We believe that INLA provides an attractive alternative to PQL for GLMMs, and we hope that this paper stimulates the greater use of Bayesian methods for this class of models.

SUPPLEMENTARY MATERIAL

Supplementary material is available at http://biostatistics.oxfordjournals.org.

ACKNOWLEDGMENT

Conflict of Interest: None declared.

FUNDING

National Institutes of Health (R01 CA095994) to J.W. Statistics for Innovation (sfi.nr.no) to H.R.

REFERENCES

BACHRACH, L. K., HASTIE, T., WANG, M. C., NARASIMHAN, B. AND MARCUS, R. (1999). Bone mineral acquisition in healthy Asian, Hispanic, Black and Caucasian youth. A longitudinal study. The Journal of Clinical Endocrinology and Metabolism 84, 4702–4712.

BESAG, J., GREEN, P. J., HIGDON, D. AND MENGERSEN, K. (1995). Bayesian computation and stochastic systems (with discussion). Statistical Science 10, 3–66.

BRESLOW, N. E. (2005). Whither PQL? In Lin, D. and Heagerty, P. J. (editors), Proceedings of the Second Seattle Symposium. New York: Springer, pp. 1–22.

BRESLOW, N. E. AND CLAYTON, D. G. (1993). Approximate inference in generalized linear mixed models. Journal of the American Statistical Association 88, 9–25.

BRESLOW, N. E. AND DAY, N. E. (1975). Indirect standardization and multiplicative models for rates, with reference to the age adjustment of cancer incidence and relative frequency data. Journal of Chronic Diseases 28, 289–301.

CLAYTON, D. G. (1996). Generalized linear mixed models. In Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. (editors), Markov Chain Monte Carlo in Practice. London: Chapman and Hall, pp. 275–301.

CRAINICEANU, C. M., DIGGLE, P. J. AND ROWLINGSON, B. (2008). Bivariate binomial spatial modeling of Loa loa prevalence in tropical Africa. Journal of the American Statistical Association 103, 21–37.

CRAINICEANU, C. M., RUPPERT, D. AND WAND, M. P. (2005). Bayesian analysis for penalized spline regression using WinBUGS. Journal of Statistical Software 14.

DAVIDIAN, M. AND GILTINAN, D. M. (1995). Nonlinear Models for Repeated Measurement Data. London: Chapman and Hall.

DICICCIO, T. J., KASS, R. E., RAFTERY, A. AND WASSERMAN, L. (1997). Computing Bayes factors by combining simulation and asymptotic approximations. Journal of the American Statistical Association 92, 903–915.

DIGGLE, P., HEAGERTY, P., LIANG, K.-Y. AND ZEGER, S. (2002). Analysis of Longitudinal Data, 2nd edition. Oxford: Oxford University Press.

FAHRMEIR, L., KNEIB, T. AND LANG, S. (2004). Penalized structured additive regression for space-time data: a Bayesian perspective. Statistica Sinica 14, 715–745.

GAMERMAN, D. (1997). Sampling from the posterior distribution in generalized linear mixed models. Statistics and Computing 7, 57–68.

GELMAN, A. (2006). Prior distributions for variance parameters in hierarchical models. Bayesian Analysis 1, 515–534.

HASTIE, T. J. AND TIBSHIRANI, R. J. (1990). Generalized Additive Models. London: Chapman and Hall.

HOBERT, J. P. AND CASELLA, G. (1996). The effect of improper priors on Gibbs sampling in hierarchical linear mixed models. Journal of the American Statistical Association 91, 1461–1473.

KASS, R. E. AND STEFFEY, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes models). Journal of the American Statistical Association 84, 717–726.

KELSALL, J. E. AND WAKEFIELD, J. C. (1999). Discussion of "Bayesian models for spatially correlated disease and exposure data" by N. Best, I. Waller, A. Thomas, E. Conlon and R. Arnold. In Bernardo, J. M., Berger, J. O., Dawid, A. P. and Smith, A. F. M. (editors), Sixth Valencia International Meeting on Bayesian Statistics. London: Oxford University Press.

MCCULLAGH, P. AND NELDER, J. A. (1989). Generalized Linear Models, 2nd edition. London: Chapman and Hall.

MCCULLOCH, C. E., SEARLE, S. R. AND NEUHAUS, J. M. (2008). Generalized, Linear, and Mixed Models, 2nd edition. New York: John Wiley and Sons.

MENG, X. AND WONG, W. (1996). Simulating ratios of normalizing constants via a simple identity. Statistica Sinica 6, 831–860.

NATARAJAN, R. AND KASS, R. E. (2000). Reference Bayesian methods for generalized linear mixed models. Journal of the American Statistical Association 95, 227–237.

NELDER, J. AND WEDDERBURN, R. (1972). Generalized linear models. Journal of the Royal Statistical Society, Series A 135, 370–384.

PINHEIRO, J. C. AND BATES, D. M. (2000). Mixed-Effects Models in S and S-plus. New York: Springer.

PLUMMER, M. (2008). Penalized loss functions for Bayesian model comparison. Biostatistics 9, 523–539.

PLUMMER, M. (2009). JAGS version 1.0.3 manual. Technical Report.

RUE, H. AND HELD, L. (2005). Gaussian Markov Random Fields: Theory and Applications. Boca Raton: Chapman and Hall/CRC.

RUE, H., MARTINO, S. AND CHOPIN, N. (2009). Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations (with discussion). Journal of the Royal Statistical Society, Series B 71, 319–392.

RUPPERT, D., WAND, M. P. AND CARROLL, R. J. (2003). Semiparametric Regression. New York: Cambridge University Press.

SKENE, A. M. AND WAKEFIELD, J. C. (1990). Hierarchical models for multi-centre binary response studies. Statistics in Medicine 9, 919–929.

SPIEGELHALTER, D., BEST, N., CARLIN, B. AND VAN DER LINDE, A. (2002). Bayesian measures of model complexity and fit (with discussion). Journal of the Royal Statistical Society, Series B 64, 583–639.

SPIEGELHALTER, D. J., THOMAS, A. AND BEST, N. G. (1998). WinBUGS User Manual. Version 1.1.1. Cambridge.

THALL, P. F. AND VAIL, S. C. (1990). Some covariance models for longitudinal count data with overdispersion. Biometrics 46, 657–671.

VERBEKE, G. AND MOLENBERGHS, G. (2000). Linear Mixed Models for Longitudinal Data. New York: Springer.

VERBEKE, G. AND MOLENBERGHS, G. (2005). Models for Discrete Longitudinal Data. New York: Springer.

WAKEFIELD, J. C. (2007). Disease mapping and spatial regression with count data. Biostatistics 8, 158–183.

WAKEFIELD, J. C. (2009). Multi-level modelling, the ecological fallacy, and hybrid study designs. International Journal of Epidemiology 38, 330–336.

WAND, M. P. AND ORMEROD, J. T. (2008). On semiparametric regression with O'Sullivan penalised splines. Australian and New Zealand Journal of Statistics 50, 179–198.

ZEGER, S. L. AND KARIM, M. R. (1991). Generalized linear models with random effects: a Gibbs sampling approach. Journal of the American Statistical Association 86, 79–86.

Received September 4, 2009; revised November 4, 2009; accepted for publication November 6, 2009.