Tuesday, October 29, 2019

The Broadway show Chicago Personal Statement Example

The most exhilarating shows in Chicago's lively Downtown Theater District contributed significantly to my appreciation of the aesthetic experience of the mind. The stages of the Ford Center for the Performing Arts/Oriental Theatre, the Cadillac Palace Theatre, the Bank of America Theatre, the Auditorium Theatre and the Drury Lane Theatre Water Tower Place are some of the most notable venues that gave me a very high opinion of the stage show in Chicago. The Addams Family and Jersey Boys are two of the most incredible productions on the Broadway stage in Chicago: the former is a splendid new show created by Jersey Boys authors Marshall Brickman and Rick Elice at the Oriental Theater, Ford Center, while the latter is a multi-award-winning show. "The weird and wonderful family created by cartoonist Charles Addams comes to devilishly delightful life in a new Broadway Musical The Addams Family. Jersey Boys, the multi-award winning Broadway show about the rise to fame of Frankie Valli & the Four Seasons, is breaking box office records at the Bank of America Theater in Chicago." (The Best Shows in Chicago) Therefore, my experience of the Broadway show in Chicago gave me an essential opportunity to understand and appreciate its aesthetic value, through which I realized the importance of the costume, dance, choreography and musical elements of the show. It is fundamental to note that the Broadway show in Chicago is an absolute beauty, incorporating dance, choreography, music and performances, and the various stages in the show bring the audience a memorable experience that no one ever forgets. One of the main attractions of the shows here is the costumes used for the various performances, dance programs, and musical shows. Significantly, Broadway costumes lend accuracy and professionalism to any performance staged in Chicago.
The great wealth of theatrical costumes enhances the beauty of every show presented here, and I was particularly attracted to the theatrical costumes of The Pirates of Penzance and The Phantom of the Opera. Another fundamental attraction of the Broadway show in Chicago is, undoubtedly, the pulse-racing revival of the musical 'Chicago', which also incorporates some of the sexiest and most sophisticated dancing on Broadway. As Ben Brantley maintains, "this new incarnation, directed by Walter Bobbie and choreographed by Ann Reinking (who also stars), makes an exhilarating case both for 'Chicago' as a musical for the ages and for the essential legacy of Fosse, whose ghost has never been livelier than it is here." (Brantley) The costumes, music, dancing, and choreography of the Broadway show in Chicago therefore attract many theatre-goers today. The Broadway show in Chicago has given me a great opportunity to recognize my ability to appreciate the aesthetic elements of every artistic form. The costumes of the show attracted me very much, and the dancers and choreographers seemed amazing to me. Significantly, the stage show in Chicago helped me realize the excitement of Chicago tourism, and every show I witnessed there will live among my loveliest memories all through my life. The striking revival of Chicago's music and dancing reminded me of the glorious days of the show.

Sunday, October 27, 2019

Recommendation For Cimb Group Finance Essay

Maybank Bhd has been the largest financial services provider in Malaysia since its incorporation, and has led the banking industry for several years. Maybank was founded by Khoo Teck Puat on 31 May 1960 and commenced operations on 12 September 1960. On 17 February 1962, Maybank was listed on Bursa Malaysia. The Maybank Group today has over 46,000 employees serving more than 22 million customers globally. Maybank offers a full range of commercial, corporate and private services, including commercial banking, Islamic banking, investment banking, insurance, stock broking, offshore banking, leasing and hire purchase, factoring, nominee services, trustee services, asset management, venture capital and Internet banking. Maybank has an international network of over 2,200 branches covering 20 countries, among them Cambodia, Vietnam, Uzbekistan, Indonesia, Bahrain, China, Papua New Guinea, the Philippines and Pakistan, and has also extended its network to New York and London. Furthermore, Maybank was the first Malaysian bank to be granted the right to establish a branch office in China. The group's key operating subsidiaries include Maybank Investment Bank Berhad, Kim Eng Holdings Ltd, Maybank Islamic Berhad, Etiqa and Bank Internasional Indonesia Tbk. In addition, its key overseas subsidiaries include PT Bank Internasional Indonesia Tbk (BII), Maybank Philippines Inc., Maybank (PNG) Ltd in Papua New Guinea and Maybank International (L) in Labuan. (Maybank Overview)

Background of CIMB Group

The creation of CIMB Group took more than 75 years, dating back to 1924. Several Malaysian banks were merged over the years to form what is now CIMB Group. CIMB Bhd was listed on Bursa Malaysia in January 2003, and in 2006 CIMB Group was launched as a regional universal bank through the merger of Commerce International Merchant Bankers, Bumiputra-Commerce Bank and Southern Bank.
Nowadays, CIMB Group is the second largest financial services provider in Malaysia. (History of CIMB Group) Headquartered in Kuala Lumpur, CIMB Group has a retail network of over 1,100 branches covering 18 countries in ASEAN, with over 43,000 employees. CIMB Group's main markets are Malaysia, Singapore, Thailand and Indonesia, across the following areas: Wholesale Banking (comprising Investment Banking and Corporate Banking), Consumer Banking, Treasury and Markets, and Group Strategy and Strategic Investments. (Profile of CIMB Group) CIMB Group offers a full range of financial products and services, covering corporate and investment banking, consumer banking, treasury, insurance and asset management. CIMB Group operates under several corporate entities, including CIMB Bank, CIMB Investment Bank, CIMB Niaga, CIMB Islamic, CIMB Securities International and CIMB Thai. (Profile of CIMB Group)

Ratio of Maybank and CIMB Group

                                    Maybank    CIMB Group
  Return on Equity Capital (ROE)    61.78%     15.17%
  Return on Assets (ROA)            1.12%      1.36%
  Net Interest Margin               1.74%      2.22%
  Net Non-interest Margin           1.00%      1.32%
  Net Operating Margin              1.49%      3.55%
  EPS                               61.4 sen   54.2 sen
  Earnings Spread                   2.17%      3.12%

Return on equity capital (ROE) is the amount of net income returned as a percentage of shareholders' equity. It measures a corporation's profitability by revealing how much profit a company generates with the money shareholders have invested. In 2011 the ROE of CIMB Group was 15.17% and the ROE of Maybank was 61.78%. These numbers show that Maybank had the greater ROE: Maybank generated more profit with the money its shareholders invested, so shareholders received a greater return on the money they invested in Maybank. As net profit increases, the dividend paid to shareholders also tends to increase, since the corporation has made a great profit for the year.
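The comparisons that follow can be checked programmatically. The sketch below encodes the textbook definitions of the ratios and the 2011 figures from the table above; the per-bank inputs passed to the ratio functions are illustrative only, not taken from either bank's accounts.

```python
# Textbook ratio definitions; 2011 figures below are from the table above.

def return_on_equity(net_income, shareholders_equity):
    """Net income as a fraction of shareholders' equity."""
    return net_income / shareholders_equity

def return_on_assets(net_income, total_assets):
    """How efficiently management uses assets to generate earnings."""
    return net_income / total_assets

def net_interest_margin(interest_income, interest_expense, earning_assets):
    """Interest earned minus interest paid, relative to earning assets."""
    return (interest_income - interest_expense) / earning_assets

# 2011 figures from the table, as decimals: (Maybank, CIMB Group)
ratios = {
    "ROE":                  (0.6178, 0.1517),
    "ROA":                  (0.0112, 0.0136),
    "Net interest margin":  (0.0174, 0.0222),
    "Net operating margin": (0.0149, 0.0355),
}

for name, (maybank, cimb) in ratios.items():
    leader = "Maybank" if maybank > cimb else "CIMB Group"
    print(f"{name}: Maybank {maybank:.2%} vs CIMB Group {cimb:.2%} -> {leader} higher")
```

Run with illustrative inputs, `return_on_assets(1.36, 100.0)` reproduces CIMB Group's 1.36% ROA, i.e. about RM0.014 earned per ringgit of assets.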
Based on its high return on common stock equity (ROE), Maybank showed a good financial position; it spent its investment money wisely during the year, supported a higher stock price, and succeeded in maximizing shareholder wealth.

Return on assets (ROA) is an indicator of how profitable a company is relative to its total assets; it gives an idea of how efficient management is at using its assets to generate earnings. In 2011, the ROA of CIMB Group and Maybank was 1.36% and 1.12% respectively, indicating that CIMB Group earned about RM0.014 for each ringgit in assets while Maybank earned about RM0.011 for each ringgit in assets. Comparing the numbers, CIMB Group had a slightly higher ROA than Maybank. This means CIMB Group obtained a higher return relative to the total capital provided, so CIMB Group was doing better at investing to generate profits.

Net interest margin is a performance metric that examines how successful a corporation's investment decisions are compared with its debt situation. The net interest margin of CIMB Group was 2.22% and that of Maybank was 1.74%. Both banks generated a positive net interest margin, which denotes that the firms made optimal decisions, because the returns generated by investments were greater than interest expenses. The figures also show CIMB Group had a higher net interest margin than Maybank, meaning CIMB Group was more optimal in its decision-making.

Net non-interest margin measures the amount of non-interest revenue the financial firm has been able to collect relative to the amount of non-interest cost incurred. In 2011, the net non-interest margin of CIMB Group and Maybank was 1.32% and 1.00% respectively, so CIMB Group had the higher net non-interest margin.
This means CIMB Group performed better on non-interest revenue than Maybank.

Net operating margin measures what proportion of a company's revenue is left over after paying the variable costs of production. In 2011 the net operating margin for CIMB Group was 3.55% and for Maybank 1.49%, meaning CIMB Group made about RM0.036 for every ringgit of sales while Maybank made about RM0.015. The calculation shows CIMB Group had the better operating performance.

Earnings per share (EPS) is the portion of a corporation's profit allocated to each outstanding share of common stock. EPS for CIMB Group was 54.2 sen and for Maybank 61.4 sen. Both companies had positive EPS, indicating that both were profitable; comparing the two banks, Maybank had the higher EPS and therefore the higher profitability and performance on this measure.

Earnings spread measures the effectiveness of a financial firm's intermediation function in borrowing and lending money, and also the intensity of competition in the firm's market area. In 2011, CIMB Group had an earnings spread of 3.12% and Maybank 2.17%. According to these figures, CIMB Group was more effective than Maybank in borrowing and lending money and in facing the competition in its market area.

Risk Analysis for Maybank

Malayan Banking Berhad (Maybank) is the largest bank in Malaysia and provides a wide range of services and products to its customers. Over the decades, like other financial institutions, Maybank has undergone a technological revolution. Nowadays, customers can make transactions or payments through the bank's electronic support system (Maybank2u). However, this system may not always operate or function well.
For example, some users have experienced being unable to receive their Maybank TAC number on their phones, which causes inconvenience to Maybank's customers because they cannot make payments via Maybank2u. An announcement on the Maybank website stated, "We are experiencing a general issue with TACs from Maybank2u at the moment. Some users may experience delays in receiving it." The bank is therefore exposed to operational risk.

Besides that, members of the public have received fraudulent telephone calls, emails or SMS claiming to be from Maybank. These messages request personal and confidential account details such as personal identification numbers, passwords and confirmation of credit card transactions, after which the victim's money or credit card is embezzled. A few such cases have occurred, undermining public confidence in the bank. If the public panics and withdraws money, the bank's liquidity position will deteriorate and the bank will be exposed to liquidity risk.

According to Perbadanan Insurans Deposit Malaysia (PIDM), in May 2010 the Prime Minister, Dato' Sri Mohd Najib bin Tun Haji Abdul Razak, who is also the Finance Minister, announced an increase in the deposit insurance limit from the previous RM60,000 to RM250,000 with effect from 31 December 2010. This creates credit risk, management risk and liquidity risk: a higher deposit insurance limit raises the moral hazard problem. Maybank may invest in risky investments or hold risky assets, with less incentive to protect the bank's position, because PIDM will pay out insured deposits up to RM250,000.

Next, Bank Negara Malaysia (BNM) announced an increase in the Statutory Reserve Requirement (SRR) ratio from 3.00% to 4.00%, effective 16 July 2011. The increase in the reserve requirement ratio reduces Maybank's ability to lend.
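A simple fractional-reserve sketch shows why a higher SRR reduces lending capacity. The SRR rates are the ones stated above; the deposit base is a hypothetical round number, not Maybank's actual figure.

```python
# Effect of raising the SRR from 3% to 4% (effective 16 July 2011)
# on funds available for lending. The deposit base is hypothetical.

def loanable_funds(deposits, srr):
    """Deposits left over after setting aside the statutory reserve."""
    return deposits * (1 - srr)

deposits = 100_000_000  # RM, illustrative deposit base
before = loanable_funds(deposits, 0.03)
after = loanable_funds(deposits, 0.04)
print(f"Loanable at 3% SRR: RM{before:,.0f}")
print(f"Loanable at 4% SRR: RM{after:,.0f}")
print(f"Reduction in lending capacity: RM{before - after:,.0f}")
```

On this sketch, every additional percentage point of SRR locks up one percent of the deposit base that could otherwise have been lent out.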
Thus, Maybank became more prudent in approving loans, with appropriate review and documentation. This can be seen in its non-performing loan ratio of 1.30% in 2011, down from 1.63% in 2010. As Maybank's non-performing loan ratio decreases, its credit risk falls too.

Maybank's risk-weighted capital ratio in 2011 was 15.36% (assuming full investment of the Dividend Reinvestment Plan). The risk-weighted capital ratio is part of the capital adequacy framework, and the minimum regulatory capital adequacy requirement for this ratio under BNM rules is 8%. Since Maybank's risk-weighted capital ratio exceeds the minimum requirement, we can see that Maybank is well capitalized: it has a good liquidity position and can lend this money to generate more income. However, lending more increases credit risk, since Maybank may lend to customers who carry high risk.

Furthermore, The Star reported in 2011 that Maybank had issued RM1 billion of subordinated notes under its notes programme of up to RM3 billion. Maybank said the subordinated notes comprised two tranches: Tranche 1 of RM750 million with a tenure of 10 years on a 10 non-call 5 basis, and Tranche 2 of RM250 million with a tenure of 12 years on a 12 non-call 7 basis. The subordinated notes received strong support from investors. Maybank's capital will increase through this RM1 billion issue, so Maybank should not face capital risk.

Maybank may also face market risk due to changes in market rates. Since Maybank cannot determine the market interest rate, adverse movements could cause significant losses. From 11 May 2011, Maybank announced an increase in its deposit rates and base lending rate (BLR). Deposit rates rose by up to 30 basis points, while the BLR increased by 30 basis points to 6.60% p.a. from the previous 6.30% p.a. This affects capital risk, liquidity risk and market risk.
The BLR is the cost of borrowing money; increasing the BLR places an additional payment burden on borrowers, while a higher deposit rate attracts depositors to keep their money with Maybank. According to Maybank's 2011 annual report, the Group's customer deposits grew 19.0% to RM282.0 billion, while deposits at the Bank level increased 14.9% to RM201.5 billion.

Last but not least, Maybank is also exposed to market risk. In June 2011, Maybank's board of directors declared that it had dropped its plan to take over RHB Capital Bhd and would not pursue the possible merger at that moment. When the merger negotiations broke down, Maybank's share price declined 2 sen to close at RM8.82, reflecting investors' great disappointment.

Risk Analysis for CIMB Group

As we know, every business carries risk, and CIMB Group is no exception. To prevent losses, CIMB Group has employed an Enterprise-Wide Risk Management Framework since 2008 to manage the risks it might face.

First, the most common risk faced by a bank is credit risk: the risk of a decline in the value of a firm's assets, of which loans are among the most important for a bank. CIMB Group analyses its credit risk and looks for ways to keep it from increasing, for example through geographic distribution, which manages the portfolio differently for each country, with a different loan exposure provided in each. For example, because CIMB Group is headquartered in Kuala Lumpur, Malaysia, its main credit exposure is much higher in Malaysia (RM191,435,925,000) than in Singapore (RM16,373,165,000).
Besides that, group risk management monitors the established credit limits daily in order to reduce the credit risk CIMB takes on.

Next is liquidity risk: the probability that a firm cannot convert its assets into funds when needed, whether to make a profit or for other purposes. CIMB Group's risk-weighted capital ratio in 2011 was around 16.8%, up from 15% in 2010. This increase reduces CIMB Group's exposure to liquidity risk and credit risk as well. According to the rule set by Bank Negara Malaysia, the minimum regulatory capital adequacy requirement for the risk-weighted capital ratio is 8%; by comparison, CIMB Group's liquidity is in good condition, which reduces liquidity and credit risk and at the same time increases the confidence of customers and investors in CIMB Group. In April 2011, an article in The Star stated that CIMB Group had earlier secured several US dollar term loan facilities with all-in pricing of 0.9%-0.98% per annum above the London Interbank Offered Rate (LIBOR). This tends to reduce CIMB Group's liquidity risk but increases its debt. On 30 September 2011, CIMB Group announced a market capitalisation of approximately RM51.8 billion. With such a large market capitalisation, the liquidity risk CIMB Group faces is greatly reduced, and CIMB Group has also shown that it does not face capital risk.

Besides that, market risk is another risk CIMB Group might face: the probability that the firm loses its position in the market, that is, that the value of the firm's investment portfolio declines due to economic changes or events that impact the market.
In 2011, Dato' Sri Nazir Razak, Group Chief Executive of CIMB Group, stated, "Our primary disappointment was our share price which significantly underperformed benchmarks." This situation exposes CIMB Group to market risk, as the public or investors might lose confidence in the group. On 2 February 2011, an article stated that CIMB said industrial production in emerging markets was growing faster than in developed countries, which would support liquidity flows into the emerging markets. Furthermore, in April 2011 CIMB Group was involved in a sukuk issue, and at the same time CIMB Group's deputy CEO and treasurer, Datuk Lee K Kwan, commented, "The current market environment remains very conducive for corporate issuers including banks to tap the fixed income markets." Statements of this kind tend to reduce the market risk CIMB Group faces.

Reputation risk concerns the impact of negative publicity that might cause the firm's customers to stop using its services. To reduce this kind of risk, CIMB Group carries out activities intended to maintain good relations with the public. In 2011, Breakthrough bought a van for the benefit of 10 farming families in a remote village near Lundu in Sarawak, with funds provided by the CIMB Foundation. This activity reduced CIMB Group's exposure to reputation risk.

Operational risk concerns losses caused by failures in an organisation's internal activities. The Basel II Pillar 3 disclosures for 2011 stated that in July 2011 CIMB Group strengthened its infrastructure and created an operational risk management department to take care of measuring operational risk. Therefore CIMB Group has a greater chance of reducing its operational risk.
Recommendation for Maybank

Market risk

Market risk is composed of four elements: interest rate risk, foreign exchange rate risk, commodity price risk and equity price risk. To reduce this risk, Maybank needs to determine whether it holds interest-sensitive assets or interest-sensitive liabilities in the period. The market rate of interest is determined by the market, and a bank can only be a price taker and accept the given rate. If Maybank holds interest-sensitive assets, it will suffer losses if interest rates decrease; if it holds interest-sensitive liabilities, it will suffer losses if interest rates increase. From Maybank's annual report, we know that Maybank has a negative cumulative interest rate gap, so to reduce the risk, Maybank should try to increase its interest-sensitive assets and reduce its interest-sensitive liabilities. Besides that, Maybank can use various hedging tools to reduce the effect of currency exposure in the appropriate circumstances. In addition, Maybank can reduce its exposure to market risk through swaps and futures, or offset it through on- and off-balance-sheet activities.

Operating risk

Fraud is a main cause of increased operating risk. In 2011 there were many cases of fraudulent telephone calls, SMS or emails requesting bank users' personal financial information; many people were cheated and lost large amounts of money, increasing public fear. To minimise fraud, an announcement on Maybank's website is insufficient. Maybank should undertake a series of initiatives to ensure that the risk arising from fraud is reduced as far as possible, such as implementing anti-fraud roadshows, awareness programmes and the introduction of fraud rules. These can increase public awareness of fraud and criminal activity.
Besides that, such initiatives can also help prevent Maybank employees from cooperating with criminal groups by disclosing Maybank users' information.

Credit Risk

Maybank needs to place strong emphasis on creating and enhancing credit risk awareness to reduce its credit risk exposure. Maybank also needs to keep its risk-weighted capital ratio above the 8% minimum. To minimise credit risk, Maybank should be more prudent in screening borrowers' applications, their repayment ability, their credit standing, the value of collateral and the borrower's guarantor. If the borrower is unable to repay the loan, the collateral can reduce the credit risk as much as possible. Furthermore, Maybank can use debt restructuring to reduce non-performing loans. A bank's balance sheet may be burdened, and the bank exposed to credit risk, by growing bad loans; borrowers who are unable to repay can negotiate with Maybank. Debt restructuring reduces bad loans and provides a win-win situation for Maybank and borrowers who cannot repay their loans or mortgages.

Liquidity Risk

Liquidity risk can basically be divided into funding liquidity risk and market liquidity risk. A bank with a high level of liquidity risk will face cash shortages and bank runs. According to Maybank's 2011 annual report, exposure to liquidity risk can be reduced by contracting derivatives whose underlying items are widely traded. Maybank should not hold too many high-risk assets, because heavier use of purchased funds can cause a liquidity shortage. Maybank can also diversify its funding sources to raise funds, so that it has a sufficient amount to meet daily transactions. Furthermore, Maybank can prepare plans or strategies to handle different liquidity crisis scenarios, especially during an economic crisis.

In conclusion, the various risks Maybank is exposed to influence one another. If management is inefficient, management risk increases.
Management risk in turn creates market risk: since Maybank cannot determine the market interest rate, it may face significant losses that affect its capital. Credit risk may then increase and Maybank's liquidity position deteriorate, leaving it unable to fully approve loan demand; asset quality may decrease and affect Maybank's earnings performance.

Recommendation for CIMB Group

Credit Risk

To avoid exposure to credit risk, CIMB Group can tighten the conditions its customers must meet to obtain a loan. Customers' financial statements should be checked thoroughly before a loan is granted. Staff training should be carried out so that staff know how to follow up with customers whose loans have not been repaid on time; in this way the chance of charge-offs, and hence the firm's losses, will be reduced. The analyses done previously should be continued so that CIMB Group can easily identify problem customers and avoid taking on the risk. CIMB Group can even take out credit insurance, so that it can claim against the insurance to cover its losses.

Liquidity Risk

As we know, a firm with high liquidity risk can find itself in a bad situation, so CIMB Group should do its best to manage its assets in order to reduce liquidity risk. Maintaining or increasing its risk-weighted capital ratio will help: a high risk-weighted capital ratio means the firm has a large amount of cash with which to carry out its activities. CIMB Group could also set up a small team that deals only with the firm's liquidity, which would help the group better understand its liquidity status. Besides that, not holding too many high-risk assets also reduces liquidity risk.
This is because high-risk assets can cause the firm losses and tie up funds in those assets. Maintaining a good relationship with competitors is also a way to contain the risk: for example, if CIMB Group and Maybank maintain a good relationship, then when something goes wrong for CIMB Group, Maybank might be willing to help solve the problem.

Market Risk

Market risk involves two types of risk: price risk and interest rate risk. For price risk, market surveys can be carried out regularly to learn the needs of the market; they may reveal market trends and the factors that affect market prices, so that CIMB Group can react faster and capture the market before others. For interest rate risk, CIMB Group has no choice but to stay alert to economic changes and events that might affect the market, which can help it avoid losses caused by changes in interest rates.

Operational Risk

One way for CIMB Group to avoid operational risk is staff training: after training, staff will know the group's whole operating system well, so human error can be reduced. Data and networking systems should always be looked after by qualified, skilled workers; this lowers the system errors that might occur, and even if an error does occur, those skilled workers can fix it in time to prevent large losses. ATM machines should also be maintained regularly to prevent errors and allow timely repair of any problem with a machine.

Reputation Risk

CIMB Group should always be careful when dealing with the public, because any misunderstanding or problem that occurs tends to damage CIMB Group's image.
When that happens, customers' confidence in CIMB Group falls and the group is exposed to reputation risk. Activities such as fundraising for people in need should therefore be carried out to build a good image with the public. In conclusion, to keep CIMB Group out of the different types of risk, its risk management department plays the most important role. Risk measurement should be done regularly, so that when a problem occurs the necessary series of actions can be taken just in time, while insurance covers the losses when something unexpected occurs, so that the firm can stay focused on its main business.

Friday, October 25, 2019

Motion Picture Special Effects Essay

"Special visual effects have added to the allure of motion pictures since the early days of cinema. French director Georges Méliès is considered the most influential pioneer of special effects. His film "A Trip to the Moon" combined live action with animation, demonstrating to audiences that cinema could create worlds, objects, and events that did not exist in real life" (Tanis par. 1). Through examples of the new techniques and the movies where they were presented, this paper will detail the changes that special effects have seen over the last twenty-five years.

Special effects have been used ever since the film industry became popular. Three-dimensional film technology became popular in the 1950s, when it enjoyed a brief period of use (Sklar par. 3). Although motion-picture film, like still photography, normally yields two-dimensional images, the illusion of a third dimension can be achieved by projecting two separate movies. Members of the audience wear 3-D eyeglasses so that the right eye sees one picture and the left eye sees the other, producing the effect of three dimensions. Three-dimensional film technology is still being used today at Universal Studios in Florida: when my family visited the amusement park, there was a feature 3-D film that was a rendition of "The Terminator." Three-dimensional film has changed, because audience members no longer have to wear glasses with one red and one blue lens; the glasses are now clear, but still give the user the same three-dimensional effect that the red and blue glasses would.

Another example of the lasting power of early techniques is stop-motion photography. The original "King Kong" used this technique, in which the King Kong figurine was repeatedly filmed for very brief segments and then moved, so that when the film was projected at normal speed, King Kong appeared to move.
The same technique animated the figures in "James and the Giant Peach" ("Nova" par. 2). After World War II there was a lull in the development and use of special effects. Technical advances in the design and manufacture of motion-picture cameras made it easier to film on actual locations, and the trend in cinematic storytelling tended toward realism, resulting in less call for fantastic illusions. Then in 1968 the film "2001: A Space Odyssey", in which astronauts ap... ...

Works Cited

Tanis, Nicholas. "Motion Picture." Microsoft Encarta Online Encyclopedia 2000. October 12, 2000 <http://encarta.msn.com>.

Sklar, Robert. "History of Motion Pictures." Microsoft Encarta Online Encyclopedia 2000. October 24, 2000 <http://encarta.msn.com>.

Nova Online. "The Grand Illusion: A Century of Special Effects." Nova Online 1996. October 12, 2000 <http://www.pbs.org/wgbh/nova/specialfx/effects/history.html>.

Hayes, R.M. Trick Cinematography: The Oscar Special-Effects Movies. North Carolina: McFarland, 1986.

Erland, Jonathan, and Kay Erland. "The Digital Series Traveling Matte Backings." Composition Components Company. October 12, 2000 <http://www.digitalgreenscreen.com/NoFrame/tmatte.html>.

Thalmann, Nadia, and Daniel Thalmann, eds. New Trends in Animation and Visualization. New York: Wiley, 1991.

La Franco, Robert. "Digital Dreamin'." Forbes Sept. 1998: 223.

Kaplan, David A. "Grand Illusions." Newsweek Online 1996. October 12, 2000 <http://www.newsweek.com>.

Howstuffworks Online. "Developing The Matrix." Howstuffworks Online. 1999. October 14, 2000 <http://howstuffworks.com/framed.htm?parent=Matrix>.

Thursday, October 24, 2019

Chameleon Chips

INTRODUCTION

Today's microprocessors sport a general-purpose design, which has its own advantages and disadvantages.

- Advantage: one chip can run a range of programs. That's why you don't need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
- Disadvantage: for any one application, much of the chip's circuitry isn't needed, and the presence of those "wasted" circuits slows things down.

Suppose, instead, that the chip's circuits could be tailored specifically for the problem at hand, say, computer-aided design, and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users. So computer scientists are hatching a novel concept that could increase number-crunching power and trim costs as well. Call it the chameleon chip. Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs). An FPGA is covered with a grid of wires. At each crossover, there's a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime, and even software that can map out circuitry that's optimized for specific problems. The chips still won't change colors. But they may well color the way we use computers in years to come. The chameleon chip is a fusion between custom integrated circuits and programmable logic: for highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than a lot of things averagely, have been the norm. With field-programmed chips, by contrast, we have chips that can be rewired in an instant.
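The grid-of-wires picture above can be sketched in miniature. The following is a toy Python model, not any vendor's tooling; the names `SwitchMatrix`, `program`, and `drive` are invented for illustration. A configuration bitstream opens or closes the switch at each crossover, which determines how row wires connect to column wires:

```python
class SwitchMatrix:
    """Toy FPGA-style routing grid: one switch per wire crossover."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # One bit per crossover: True = switch closed (wires connected).
        self.config = [[False] * cols for _ in range(rows)]

    def program(self, bitstream):
        """Load a flat string of 0/1 characters into the switches."""
        assert len(bitstream) == self.rows * self.cols
        for i, bit in enumerate(bitstream):
            self.config[i // self.cols][i % self.cols] = bool(int(bit))

    def drive(self, row_signals):
        """Each column wire ORs together every row wire connected to it."""
        return [
            any(row_signals[r] and self.config[r][c] for r in range(self.rows))
            for c in range(self.cols)
        ]

m = SwitchMatrix(2, 3)
m.program("100011")           # row 0 -> col 0; row 1 -> cols 1 and 2
print(m.drive([True, False])) # → [True, False, False]
```

Reprogramming is just a matter of calling `program` with a different bitstream, which is the essence of the "rewire it anytime" idea described above.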
Thus the benefits of customization can be brought to the mass market. A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. Such chips can be called "chips on demand." In practical terms, this ability translates to immense flexibility in device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function. Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players. Another variety, programmable logic chips, are equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest, but also the least flexible, of all the options. Chameleon chips are highly flexible processors that can be reconfigured remotely in the field; they are designed to simplify communication system design while delivering better price/performance. The chameleon chip is a high-bandwidth reconfigurable communications processor (RCP).
It aims at changing a system's design from a remote location, which will mean more versatile handhelds. The processors operate at 24,000 16-bit million operations per second (MOPS) and 3,000 16-bit million multiply-accumulates per second (MMACS), and provide 50 channels of CDMA2000 chip-rate processing. The 0.25-micron CS2112 chip is an example. These new chips are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed; the chameleon chip is an example of this kind of "chip on demand." "Reconfigurable computing goes a step beyond programmable chips in the matter of flexibility. It is not only possible but relatively commonplace to 'rewrite' the silicon so that it can perform new functions in a split second. Reconfigurable chips are simply the extreme end of programmability." The overall performance of the ACM can surpass the DSP because the ACM only constructs the actual hardware needed to execute the software, whereas DSPs and microprocessors force the software to fit a given architecture. One reason this type of versatility is not possible today is that handheld gadgets are typically built around highly optimized specialty chips that do one thing really well. These chips are fast and relatively cheap, but their circuits are literally written in stone, or at least in silicon. A multipurpose gadget would have to have many specialized chips, a costly and clumsy solution. Alternately, you could use a general-purpose microprocessor, like the one in your PC, but that would be slow as well as expensive. For these reasons, chip designers are turning increasingly to reconfigurable hardware: integrated circuits whose internal logic elements can be arranged and rearranged on the fly to fit particular applications.
Designers of multimedia systems face three significant challenges in today's ultra-competitive marketplace: our products must do more, cost less, and be brought to market quicker than ever. Though each of these goals is individually attainable, the hat trick is generally unachievable with traditional design and implementation techniques. Fortunately, some new techniques are emerging from the study of reconfigurable computing that make it possible to design systems that satisfy all three requirements simultaneously. Although originally proposed in the late 1960s by a researcher at UCLA, reconfigurable computing is a relatively new field of study. The decades-long delay had mostly to do with a lack of acceptable reconfigurable hardware. Reprogrammable logic chips like field-programmable gate arrays (FPGAs) have been around for many years, but these chips have only recently reached gate densities making them suitable for high-end applications. (The densest of the current FPGAs have approximately 100,000 reprogrammable logic gates.) With an anticipated doubling of gate densities every 18 months, the situation will only become more favorable from this point forward. The primary product is ground-station equipment for satellite communications, an application that involves high-rate communications, signal processing, and a variety of network protocols and data formats.

ADVANTAGES AND APPLICATIONS

Its applications include:
- data-intensive Internet applications
- DSP
- wireless base stations
- voice compression
- software-defined radio
- high-performance embedded telecom and datacom applications
- xDSL concentrators
- fixed wireless local loop
- multichannel voice compression
- multiprotocol packet and cell processing

Its advantages:
- can create customized communications signal processors
- increased performance and channel count
- can adapt more quickly to new requirements and standards
- lower development costs and reduced risk
FPGA

One of the most promising approaches in the realm of reconfigurable architecture is a technology called "field-programmable gate arrays." The strategy is to build uniform arrays of thousands of logic elements, each of which can take on the personality of different fundamental components of digital circuitry; the switches and wires can be reprogrammed to operate in any desired pattern, effectively rewiring a chip's circuitry on demand. A designer can download a new wiring pattern and store it in the chip's memory, where it can be easily accessed when needed.

Not so hard after all

Reconfigurable hardware first became practical with the introduction a few years ago of a device called a "field-programmable gate array" (FPGA) by Xilinx, an electronics company that is now based in San Jose, California. An FPGA is a chip consisting of a large number of "logic cells". These cells, in turn, are sets of transistors wired together to perform simple logical operations.

Evolving FPGAs

FPGAs are arrays of logic blocks that are strung together through software commands to implement higher-order logic functions. Logic blocks are similar to switches with multiple inputs and a single output, and are used in digital circuits to perform binary operations. Unlike with other integrated circuits, developers can alter both the logic functions performed within the blocks and the connections between the blocks of FPGAs by sending signals that have been programmed in software to the chip. FPGA blocks can perform the same high-speed hardware functions as fixed-function ASICs, and, to distinguish them from ASICs, they can be rewired and reprogrammed at any time from a remote location through software. Although it took several seconds or more to change connections in the earliest FPGAs, FPGAs today can be configured in milliseconds.
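In real FPGAs, such a logic cell is typically a small lookup table (LUT): the configuration bits stored under the cell are, quite literally, its truth table. A minimal illustrative sketch (the class name `LUT2` and its methods are invented for this example):

```python
class LUT2:
    """A 2-input logic cell: the four stored bits *are* its truth table."""

    def __init__(self, bits):
        assert len(bits) == 4
        self.bits = list(bits)           # bits[(a << 1) | b] is the output

    def reprogram(self, bits):
        """'Rewiring' the cell is just overwriting its configuration bits."""
        assert len(bits) == 4
        self.bits = list(bits)

    def eval(self, a, b):
        return self.bits[(a << 1) | b]

cell = LUT2([0, 0, 0, 1])                # configured as an AND gate
print(cell.eval(1, 1))                   # → 1
cell.reprogram([0, 1, 1, 0])             # same "silicon", now an XOR gate
print(cell.eval(1, 1))                   # → 0
```

Nothing physical changes between the two calls; only the stored numbers do, which is why reconfiguration can be as fast as a memory write.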
Field-programmable gate arrays have historically been applied as what is called glue logic in embedded systems, connecting devices with dissimilar bus architectures. They have often been used to link digital signal processors (CPUs used for digital signal processing) to general-purpose CPUs. The growth in FPGA technology has lifted the arrays beyond the simple role of providing glue logic. With their current capabilities, they can now clearly be classed as system-level components, just like CPUs and DSPs. The largest of the FPGA devices made by the company with which one of the authors of this article is affiliated, for example, has more than 150 million transistors, seven times more than a Pentium-class microprocessor. Given today's time-to-market pressures, it is increasingly critical that all system-level components be easy to integrate, especially since the phase involving the integration of multiple technologies has become the most time-consuming part of a product's development cycle.

Integrating Hardware and Software

Systems designers producing mixed CPU and FPGA designs can take advantage of deterministic real-time operating systems (RTOSs). Deterministic software is suited for controlling hardware. As such, it can be used to efficiently manage the content of system data and the flow of such data from a CPU to an FPGA. FPGA developers can work with RTOS suppliers to facilitate the design and deployment of systems using combinations of the two technologies. FPGAs operating in conjunction with embedded design tools provide an ideal platform for developing high-performance reconfigurable computing solutions for medical instrument applications. The platform supports the design, development, and testing of embedded systems based on the C language. Integration of FPGA technology into systems using a deterministic RTOS can be streamlined by means of an enhanced application programming interface (API).
The blending of hardware, firmware, application software, and an RTOS into a platform-based approach removes many of the development barriers that still limit the functionality of embedded applications. Development, profiling, and analysis tools are available that can be used to analyze computational hot spots in code and to perform low-level timing analysis in multitasking environments. One way developers can use these analytical tools is to determine when to implement a function in hardware or software. Profiling enables them to quickly identify functionality that is frequently used or computationally intensive; such functions may be prime candidates for moving from software to FPGA hardware. An integrated suite of run-time analysis tools with a run-time error checker and a visual interactive profiler can help developers create higher-quality, higher-performance code in little time. An FPGA consists of an array of configurable logic blocks that implement the logical functions. In FPGAs, both the logic functions performed within the logic blocks and the connections between the blocks can be altered by sending signals to the chip. These blocks are similar in structure to the gate arrays used in some ASICs, but whereas standard gate arrays are configured and fixed during manufacture, the configurable logic blocks in new FPGAs can be rewired and reprogrammed repeatedly, in around a microsecond.

The advantages of FPGAs:
- short time to market
- flexibility and easy upgrades
- cheap to make

An FPGA can be configured using:
- VHDL (VHSIC Hardware Description Language)
- Handel-C
- Java

FPGAs are presently used in:
- encryption
- image processing
- mobile communications (and can be used in 4G mobile communication)

In short, field-programmable gate arrays offer companies the possibility of developing a chip very quickly, since a chip can be configured by software.
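The profiling-driven hardware/software partitioning described above can be mimicked in a few lines. This is a generic sketch, not any vendor's tool; the `profiled` decorator, the example functions, and the `min_calls` threshold are all invented for illustration:

```python
import time
from collections import defaultdict

stats = defaultdict(lambda: [0, 0.0])    # name -> [call count, total seconds]

def profiled(fn):
    """Record call counts and cumulative run time per function."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            s = stats[fn.__name__]
            s[0] += 1
            s[1] += time.perf_counter() - start
    return wrapper

@profiled
def fir_filter(samples, taps):
    # Dataflow-heavy inner loop: a typical candidate for FPGA hardware.
    return [sum(samples[i - j] * taps[j] for j in range(len(taps)))
            for i in range(len(taps) - 1, len(samples))]

@profiled
def parse_header(packet):
    # Branchy control logic: usually better left on the CPU.
    return packet.split(":", 1)[0]

def hardware_candidates(min_calls=50):
    """Frequently called functions are candidates to move to the FPGA."""
    return sorted(name for name, (calls, total) in stats.items()
                  if calls >= min_calls)

for _ in range(100):
    fir_filter(list(range(64)), [1, 2, 1])
parse_header("hdr:payload")
print(hardware_candidates())             # → ['fir_filter']
```

A real profiler would weigh cumulative time as well as call counts, but the decision rule is the same: hot, regular computation moves to the fabric, irregular control stays in software.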
A chip can also be reconfigured, either during execution time or as part of an upgrade to allow new applications, simply by loading a new configuration into the chip. The advantages can be seen in terms of cost, speed, and power consumption. The added functionality of multi-parallelism allows one FPGA to replace multiple ASICs. The applications of FPGAs include:
- image processing
- encryption
- mobile communication
- memory management and digital signal processing
- telephone units
- mobile base stations

Although it is very hard to predict the direction this technology will take, it seems more than likely that future silicon chips will be a combination of programmable logic, memory blocks, and specific function blocks such as floating-point units. It is hard to predict at this early stage, but it looks likely that the technology will have to change over the coming years, and the rate of change for major players in today's marketplace, such as Intel, Microsoft, and AMD, will be crucial to their survival. The precise behaviour of each cell is determined by loading a string of numbers into a memory underneath it. The way in which the cells are interconnected is specified by loading another set of numbers into the chip. Change the first set of numbers and you change what the cells do. Change the second set and you change the way they are linked up. Since even the most complex chip is, at its heart, nothing more than a bunch of interlinked logic circuits, an FPGA can be programmed to do almost anything that a conventional fixed piece of logic circuitry can do, just by loading the right numbers into its memory. And by loading in a different set of numbers, it can be reconfigured in the twinkling of an eye. Basic reconfigurable circuits already play a huge role in telecommunications.
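The two "sets of numbers" just described, one fixing what each cell does and one fixing how the cells are linked, can be sketched directly. Everything below is an invented illustration: `run_fabric` takes per-cell truth tables (the first number set) and a netlist of input sources (the second number set):

```python
def run_fabric(cell_bits, netlist, inputs):
    """cell_bits[i] is cell i's 4-bit truth table (first number set);
    netlist[i] names cell i's two input sources, each either an
    external input name or an earlier cell's index (second number set)."""
    values = dict(inputs)
    for cell, (a, b) in enumerate(netlist):
        va, vb = values[a], values[b]
        values[cell] = cell_bits[cell][(va << 1) | vb]
    return values

XOR, AND, OR = [0, 1, 1, 0], [0, 0, 0, 1], [0, 1, 1, 1]

# First configuration: a half adder (cell 0 = sum, cell 1 = carry).
out = run_fabric([XOR, AND], [("x", "y"), ("x", "y")], {"x": 1, "y": 1})
print(out[0], out[1])                    # → 0 1

# Load different numbers into the same "fabric": now cell 0 is an OR gate.
out = run_fabric([OR, AND], [("x", "y"), ("x", "y")], {"x": 1, "y": 0})
print(out[0], out[1])                    # → 1 0
```

Changing `cell_bits` changes what the cells do; changing `netlist` changes how they are linked, which is exactly the two-number-set mechanism described in the text.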
For instance, relatively simple versions made by companies such as Xilinx and Altera are widely used in network routers and switches, enabling circuit designs to be easily updated electronically without replacing chips. In these early applications, however, the speed at which the chips reconfigure themselves is not critical. To be quick enough for personal information devices, the chips will need to completely reconfigure themselves in a millisecond or less. "That kind of chameleon device would be the killer app of reconfigurable computing." These experts predict that in the next couple of years reconfigurable systems will be used in cell phones to handle things like changes in telecommunications systems or standards as users travel between calling regions, or between countries. It is getting more expensive and difficult to pattern, or etch, the elaborate circuitry used in microprocessors, and many experts have predicted that maintaining the current rate of putting more circuits into ever smaller spaces will, sometime in the next 10 to 15 years, result in features on microchips no bigger than a few atoms, which would demand a nearly impossible level of precision in fabricating circuitry. Reconfigurable chips, however, don't need that type of precision, so with them we can make computers that function at the nanoscale.

CS2112 (a reconfigurable processor developed by Chameleon Systems)

The RCP architecture is designed to be as flexible as an FPGA and as easy to program as a digital signal processor (DSP), with real-time, visual debugging capability. The development environment, comprising Chameleon's C-SIDE software tool suite and CT2112SDM development kit, enables customers to develop and debug communication and signal processing systems running on the RCP. The RCP's development environment helps overcome a fundamental design and debug challenge facing communication system designers.
In order to build sufficient performance, channel capacity, and flexibility into their systems, today's designers have been forced to employ an amalgamation of DSPs, FPGAs, and ASICs, each of which requires a unique design and debug environment. The RCP platform was designed from the ground up to alleviate this problem: first, by significantly exceeding the performance and channel capacity of the fastest DSPs; second, by integrating a complete SoC subsystem, including an embedded microprocessor, PCI core, DMA function, and high-speed bus; and third, by consolidating the design and debug environment into a single platform-based design system that affords the designer comprehensive visibility and control. The C-SIDE software suite includes tools used to compile C and assembly code for execution on the CS2112's embedded microprocessor, and Verilog simulation and synthesis tools used to create parallel datapath kernels which run on the CS2112's reconfigurable processing fabric. In addition to code generation tools, the package contains source-level debugging tools that support simulation and real-time debugging. Chameleon's design approach leverages the methods employed by most of today's communications system designers. The designer starts with a C program that models the signal processing functions of the baseband system. Having identified the dataflow-intensive functional blocks, the designer implements them in the RCP to accelerate them by 10- to 100-fold. The designer creates equivalent functions for those blocks, called kernels, in Chameleon's reconfigurable, assembly-language-like design entry language. The assembler then automatically generates standard Verilog for these kernels, which the designer can verify with commercial Verilog simulators. Using these tools, the designer can compare testbench results for the original C functions with similar results for the Verilog kernels.
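That compare-the-testbenches step can be illustrated generically: a bit-exact reference function stands in for the original C model, a second implementation stands in for the accelerated kernel, and the same test vectors are run through both. This is a sketch of the methodology only; none of the names below come from Chameleon's actual tools:

```python
import random

def mac_reference(samples, coeffs):
    """Stands in for the original C model of a multiply-accumulate block."""
    acc = 0
    for s, c in zip(samples, coeffs):
        acc += s * c
    return acc

def mac_kernel(samples, coeffs):
    """Stands in for the accelerated kernel; must match bit-for-bit."""
    return sum(s * c for s, c in zip(samples, coeffs))

def run_testbench(trials=1000, seed=42):
    """Drive both implementations with identical random 16-bit vectors."""
    rng = random.Random(seed)
    for _ in range(trials):
        samples = [rng.randrange(-2**15, 2**15) for _ in range(16)]
        coeffs = [rng.randrange(-2**15, 2**15) for _ in range(16)]
        if mac_reference(samples, coeffs) != mac_kernel(samples, coeffs):
            return False
    return True

print(run_testbench())                   # → True
```

The point of the methodology is that the golden C model, not the kernel, defines correct behaviour; any mismatch on shared test vectors flags a kernel bug before synthesis.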
In the next phase, the designer synthesises the Verilog kernels using Chameleon's synthesis tools targeting Chameleon technology. At the end, the tools output a bit file that is used to configure the RCP. The designer then integrates the application-level C code with the Verilog kernels and the rest of the standard C functions. Chameleon's C-SIDE compiler and linker technology makes this integration step transparent to the designer. The CS2112 development environment makes all chip registers and memory locations accessible through a development console that enables full processor-like debugging, including features like single-stepping and setting breakpoints. Before actually productising the system, the designer must often perform a system-level simulation of the data flow within the context of the overall system. Chameleon's development board enables the designer to connect multiple RCPs to other devices in the system using the PCI bus and/or programmable I/O pins. This helps prove the design concept, and enables the designer to profile the performance of the whole base-station system in a real-world environment. With telecommunications OEMs facing shrinking product life cycles and increasing market pressures, not to mention the constant flux of protocols and standards, it's more necessary than ever to have a platform that's reconfigurable. This is where the chameleon chips are going to make their effect felt. The Chameleon CS2112 is a high-bandwidth, reconfigurable communications processor aimed at:
- second- and third-generation (2G-3G) wireless base stations
- fixed-point wireless local loop (WLL)
- voice over IP
- DSL (digital subscriber line)
- high-end DSP operations
- software-defined radio
- security processing

"Traditional solutions such as FPGAs and DSPs lack the performance for high-bandwidth applications, and fixed-function solutions like ASICs incur unacceptable limits." Each product in the CS2000 family has the same fundamental functional blocks: a 32-bit RISC processor, a full-featured memory controller, a PCI controller, and a reconfigurable processing fabric, all of which are interconnected by a high-speed system bus. The fabric comprises an array of reconfigurable tiles used to implement the desired algorithms. Each tile contains seven 32-bit reconfigurable datapath units, four blocks of local store memory, two 16×24-bit multipliers, and a control logic unit.

Basic Architecture

Components:
- 32-bit RISC ARC processor @ 125 MHz
- 64-bit memory controller
- 32-bit PCI controller
- reconfigurable processing fabric (RPF)
- high-speed system bus
- programmable I/O (160 pins)
- DMA subsystem
- configuration subsystem

More on the architecture of the RPF: it has 4 slices with 3 tiles in each, and each tile can be reconfigured at runtime. Tiles contain:
- datapath units
- local store memories
- 16×24 multipliers
- a control logic unit

The C-SIDE design system is a fully integrated tool suite, with a C compiler, Verilog synthesizer, and full-chip simulator, as well as a debug and verification environment, an element not readily found in ASIC and FPGA design flows, according to Chameleon. Still, reconfigurable chips represent an attempt to combine the best features of hard-wired custom chips, which are fast and cheap, and programmable logic device (PLD) chips, which are flexible and easily brought to market. Unlike PLDs, QuickSilver's reconfigurable chips can be reprogrammed every few nanoseconds, rewiring circuits so they are processing global positioning satellite signals one moment or CDMA cellular signals the next. Think of the chips as consisting of libraries with preset hardware designs, and chalkboards.
Upon receiving instructions from software, the chip takes a hardware component from the library (which is stored as software in memory) and puts it on the chalkboard (the chip). The chip wires itself instantly to run the software and dispatches it. The hardware can then be erased for the next cycle. With this style of computing, its chips can operate 80 times as fast as a custom chip but still consume less power and board space, which translates into lower costs. The company believes that "soft silicon," or chips that can be reconfigured on the fly, can be the heart of multifunction camcorders or digital television sets. With programmable logic devices, designers use inexpensive software tools to quickly develop, simulate, and test their designs. Then, a design can be quickly programmed into a device, and immediately tested in a live circuit. The PLD that is used for this prototyping is the exact same PLD that will be used in the final production of a piece of end equipment, such as a network router, a DSL modem, a DVD player, or an automotive navigation system. The two major types of programmable logic devices are field-programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs). Of the two, FPGAs offer the highest logic density, the most features, and the highest performance. FPGAs are used in a wide variety of applications ranging from data processing and storage to instrumentation, telecommunications, and digital signal processing. To overcome these limitations and offer a flexible, cost-effective solution, many new entrants to the DSP market are extolling the virtues of configurable and reconfigurable DSP designs. This latest breed of DSP architectures promises greater flexibility to quickly adapt to numerous and fast-changing standards. Plus, they claim to achieve higher performance without adding silicon area, cost, design time, or power consumption.
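The library-and-chalkboard picture above can be put in code: designs live in a "library" held in memory, and only the one currently needed occupies the limited "chalkboard". All names here are invented for this sketch, and ordinary Python functions stand in for stored hardware designs:

```python
class ReconfigurableChip:
    """Chalkboard model: one loaded design at a time, erasable."""

    def __init__(self, library):
        self.library = library           # design name -> stored behaviour
        self.loaded = None

    def configure(self, name):
        """Copy a design from the library onto the chalkboard."""
        self.loaded = self.library[name]

    def erase(self):
        """Wipe the chalkboard, freeing the fabric for the next cycle."""
        self.loaded = None

    def run(self, data):
        if self.loaded is None:
            raise RuntimeError("no design loaded")
        return self.loaded(data)

library = {
    "camera": lambda frame: f"encoded({frame})",
    "recorder": lambda audio: f"compressed({audio})",
}
chip = ReconfigurableChip(library)
chip.configure("camera")
print(chip.run("frame0"))                # → encoded(frame0)
chip.erase()
chip.configure("recorder")
print(chip.run("clip0"))                 # → compressed(clip0)
```

The single `loaded` slot is the point of the metaphor: the fabric is a scarce resource, so designs are drawn on, used, and erased rather than all being resident at once.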
In essence, because the architecture isn't rigid, the reconfigurable DSP lets the developer tailor the hardware for a specific task, achieving the right size and cost for the target application. Moreover, the same platform can be reused for other applications. Because development tools are a critical part of this solution (in fact, they're true enablers), the newcomers also ensure that the tools are robust and tightly linked to the devices' flexible architectures. While providing an intuitive, integrated development environment for the designers, the manufacturers ensure affordability as well.

RECONFIGURING THE ARCHITECTURE

Some of the new configurable DSP architectures are reconfigurable too; that is, developers can modify their landscape on the fly, depending on the incoming data stream. This capability permits dynamic reconfigurability of the architecture as demanded by the application. Proponents of such chips are proclaiming an era of "chip-on-demand," wherein new algorithms can be accommodated on-chip in real time via software. This eliminates the cumbersome job of fitting the latest algorithms and protocols into existing rigid hardware. A reconfigurable communications processor (RCP) can be reconfigured for different processing algorithms in one clock cycle. Chameleon designers are revising the architecture to create a chip that can address a much broader range of applications. Plus, the supplier is preparing a new, more user-friendly suite of tools for traditional DSP designers. Thus, the company is dropping the term reconfigurability for the new architecture and going with a more traditional name, the streaming data processor (SDP). Though the SDP will include a reconfigurable processing fabric, it will be substantially altered, the company says. Unlike the older RCP, the new chip won't have the ARM RISC core, and it will support a much higher clock rate. Additionally, it will be implemented in a 0.13-µm CMOS process to meet the signal processing needs of a much broader market. Further details await the release of the SDP sometime in the first quarter of 2003. While Chameleon is in redesign mode, QuickSilver Technologies is in test mode. This reconfigurable proponent, which prefers to call its architecture an adaptive computing machine, or ACM, has realized its first silicon test chip. In fact, the tests indicate that it outperforms a hardwired, fixed-function ASIC in processing compute-intensive cdma2000 algorithms, like system acquisition, rake finger, and set maintenance. For example, the ASIC's nominal speed for searching 215 phase offsets in a basic multipath search algorithm is 3.5 seconds. The ACM test chip took just one second at a 25-MHz clock speed to perform the same number of searches in a cdma2000 handset. Likewise, the device accomplishes over 57,000 adaptations per second in rake-finger operation to cycle through all operations in this application every 52 µs (Fig. 1). In the set-maintenance application, the chip is almost three times faster than an ASIC, claims QuickSilver. The power of a computer stems from the fact that its behaviour can be changed with little more than a dose of new software. A desktop PC might, for example, be browsing the Internet one minute, and running a spreadsheet or entering the virtual world of a computer game the next. But the ability of a microprocessor (the chip that is at the heart of any PC) to handle such a variety of tasks is both a strength and a weakness, because hardware dedicated to a particular job can do things so much faster. Recognising this, the designers of modern PCs often hand over such tasks as processing 3-D graphics, decoding and playing movies, and processing sound (things that could, in theory, be done by the basic microprocessor) to specialist chips.
These chips are designed to do their particular jobs extremely fast, but they are inflexible in comparison with a microprocessor, which does its best to be a jack-of-all-trades. So the hardware approach is faster, but using software is more flexible. At the moment, such reconfigurable chips are used mainly as a way of conjuring up specialist hardware in a hurry. Rather than designing and building an entirely new chip to carry out a particular function, a circuit designer can use an FPGA instead. This speeds up the design process enormously, because making changes becomes as simple as downloading a new configuration into the chip. Chameleon Systems also develops reconfigurable chips for the high-end telecom-switching market.

RECONFIGURABLE PROCESSORS
While microprocessors have been the dominant devices in use for general-purpose computing for the last decade, there is still a large gap between the computational efficiency of microprocessors and custom silicon. Reconfigurable devices, such as FPGAs, have come closer to closing that gap, offering a 10x benefit in computational density over microprocessors, and often another potential 10x improvement in yielded functional density on low-granularity operations. On highly regular computations, reconfigurable architectures have a clear superiority over traditional processor architectures. On tasks with high functional diversity, microprocessors use silicon more efficiently than reconfigurable devices. The BRASS project is developing a coupled architecture which allows a reconfigurable array and a processor core to cooperate efficiently on computational tasks, exploiting the strengths of both architectures. We are developing an architecture and a prototype component that will combine a processor and a high-performance reconfigurable array on a single chip. The reconfigurable array extends the usefulness and efficiency of the processor by providing the means to tailor its circuits for special tasks. The processor improves the efficiency of the reconfigurable array for irregular, general-purpose computation. We anticipate that a processor combined with reconfigurable resources can achieve a significant performance improvement over either a separate processor or a separate reconfigurable device on an interesting range of problems drawn from embedded computing applications. As such, we hope to demonstrate that this composite device is an ideal system element for embedded processing. Reconfigurable devices have proven extremely efficient for certain types of processing tasks.
The key to their cost/performance advantage is that conventional processors are often limited by instruction bandwidth and execution restrictions, or by an insufficient number or type of functional units, whereas reconfigurable logic can exploit more program parallelism. By dedicating significantly less instruction memory per active computing element, reconfigurable devices achieve a 10x improvement in functional density over microprocessors. At the same time, this lower memory ratio allows reconfigurable devices to deploy active capacity at a finer grain, letting them realize a higher yield of their raw capacity than conventional processors, sometimes by as much as 10x. The high functional density characteristic of reconfigurable devices comes at the expense of the high functional diversity characteristic of microprocessors. Microprocessors have evolved to a highly optimized configuration with clear cost/performance advantages over reconfigurable arrays for a large set of tasks with high functional diversity. By combining a reconfigurable array with a processing core, we hope to achieve the best of both worlds. While it is possible to combine a conventional processor with commercial reconfigurable devices at the circuit-board level, integration radically changes the I/O costs and design point for both devices, resulting in a qualitatively different system. Notably, the lower on-chip communication costs allow efficient cooperation between the processor and the array at a finer grain than is sensible with discrete designs. RECONFIGURABLE COMPUTING When we talk about reconfigurable computing, we're usually talking about FPGA-based system designs. Unfortunately, that doesn't qualify the term precisely enough. System designers use FPGAs in many different ways. The most common use of an FPGA is for prototyping the design of an ASIC. In this scenario, the FPGA is present only on the prototype hardware and is replaced by the corresponding ASIC in the final production system.
This use of FPGAs has nothing to do with reconfigurable computing. However, many system designers are choosing to leave the FPGAs as part of the production hardware. Lower FPGA prices and higher gate counts have helped drive this change. Such systems retain the execution speed of dedicated hardware but also have a great deal of functional flexibility. The logic within the FPGA can be changed if or when it is necessary, which has many advantages. For example, hardware bug fixes and upgrades can be administered as easily as their software counterparts. In order to support a new version of a network protocol, you can redesign the internal logic of the FPGA and send the enhancement to the affected customers by email. Once they've downloaded the new logic design to the system and restarted it, they'll be able to use the new version of the protocol. This is configurable computing; reconfigurable computing goes one step further. Reconfigurable computing involves manipulation of the logic within the FPGA at run-time. In other words, the design of the hardware may change in response to the demands placed upon the system while it is running. Here, the FPGA acts as an execution engine for a variety of different hardware functions, some executing in parallel, others in serial, much as a CPU acts as an execution engine for a variety of software threads. We might even go so far as to call the FPGA a reconfigurable processing unit (RPU). Reconfigurable computing allows system designers to execute more hardware than they have gates to fit, which works especially well when there are parts of the hardware that are occasionally idle. One theoretical application is a smart cellular phone that supports multiple communication and data protocols, though just one at a time. When the phone passes from a geographic region that is served by one protocol into a region that is served by another, the hardware is automatically reconfigured.
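The multiprotocol-phone scenario can be sketched in a few lines. Everything here is a hedged simulation: the protocol names, the region-change trigger, and the idea of a single resident "design" stand in for a real bitstream swap in the RPU; no real radio API is implied.

```python
# Table of available hardware "designs" (stand-ins for FPGA bitstreams).
RPU_DESIGNS = {
    "GSM":  lambda data: f"GSM-encoded({data})",
    "CDMA": lambda data: f"CDMA-encoded({data})",
}

class Phone:
    def __init__(self):
        self.active = None  # only one protocol resident at a time

    def enter_region(self, protocol):
        # Crossing a region boundary: overwrite the single resident design.
        self.active = RPU_DESIGNS[protocol]
        return protocol

    def transmit(self, data):
        return self.active(data)


phone = Phone()
phone.enter_region("GSM")
print(phone.transmit("hello"))   # GSM-encoded(hello)
phone.enter_region("CDMA")       # region change triggers reconfiguration
print(phone.transmit("hello"))   # CDMA-encoded(hello)
```

The design choice the sketch illustrates is that the phone never holds both protocol implementations at once; it trades reconfiguration time for gate count.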
This is reconfigurable computing at its best, and using this approach it is possible to design systems that do more, cost less, and have shorter design and implementation cycles. Reconfigurable computing has several advantages.
• First, it is possible to achieve greater functionality with a simpler hardware design. Because not all of the logic must be present in the FPGA at all times, the cost of supporting additional features is reduced to the cost of the memory required to store the logic design. Consider again the multiprotocol cellular phone. It would be possible to support as many protocols as could fit into the available on-board ROM. It is even conceivable that new protocols could be uploaded from a base station to the handheld phone on an as-needed basis, thus requiring no additional memory.
• The second advantage is lower system cost, which does not manifest itself exactly as you might expect. On a low-volume product, there will be some production cost savings, which result from the elimination of the expense of ASIC design and fabrication. However, for higher-volume products, the production cost of fixed hardware may actually be lower. We have to think in terms of lifetime system costs to see the savings. Systems based on reconfigurable computing are upgradable in the field. Such changes extend the useful life of the system, thus reducing lifetime costs.
• The final advantage of reconfigurable computing is reduced time-to-market. The fact that you're no longer using an ASIC is a big help in this respect. There are no chip design and prototyping cycles, which eliminates a large amount of development effort. In addition, the logic design remains flexible right up until (and even after) the product ships. This allows an incremental design flow, a luxury not typically available to hardware designers. You can even ship a product that meets the minimum requirements and add features after deployment.
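The "think in lifetime costs" argument can be made concrete with toy arithmetic. All figures below are invented assumptions for illustration (NRE, unit prices, upgrade costs); the only claim is the shape of the comparison: an ASIC respin per field upgrade versus a cheap new bitstream.

```python
def lifetime_cost(nre, unit_cost, units, upgrade_cost, upgrades):
    """Total cost over a product's life under a very simple model."""
    return nre + unit_cost * units + upgrade_cost * upgrades


# Hypothetical low-volume product that needs two upgrades over its life:
asic = lifetime_cost(nre=500_000, unit_cost=10, units=10_000,
                     upgrade_cost=200_000, upgrades=2)  # respin per upgrade
fpga = lifetime_cost(nre=0, unit_cost=40, units=10_000,
                     upgrade_cost=5_000, upgrades=2)    # new bitstream only
print(asic, fpga)  # 1000000 410000
```

Under these made-up numbers the reconfigurable system wins despite a 4x higher unit cost; at very high volumes or with zero upgrades, the fixed-hardware line can win instead, which matches the caveat in the text.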
In the case of a networked product like a set-top box or cellular telephone, it may even be possible to make such enhancements without customer involvement. RECONFIGURABLE HARDWARE Traditional FPGAs are configurable, but not run-time reconfigurable. Many of the older FPGAs expect to read their configuration out of a serial EEPROM, one bit at a time, and they can only be made to do so by asserting a chip reset signal. This means that the FPGA must be reprogrammed in its entirety and that its previous internal state cannot be captured beforehand. Though these features are compatible with configurable computing applications, they are not sufficient for reconfigurable computing. In order to benefit from run-time reconfiguration, it is necessary that the FPGAs involved have certain additional features; the more of these features they have, the more flexible the system design can be. The software that manages the reconfigurable hardware is typically responsible for:
• Deciding which hardware objects to execute and when
• Swapping hardware objects into and out of the reconfigurable logic
• Performing routing between hardware objects, or between hardware objects and the hardware object framework.
Of course, having software manage the reconfigurable hardware usually means having an embedded processor or microcontroller on-board. (We expect several vendors to introduce single-chip solutions that combine a CPU core and a block of reconfigurable logic by year's end.) The embedded software that runs there is called the run-time environment and is analogous to the operating system that manages the execution of multiple software threads. Like threads, hardware objects may have priorities, deadlines, contexts, and so on. It is the job of the run-time environment to organize this information and make decisions based upon it. The reason we need a run-time environment at all is that there are decisions to be made while the system is running. And as human designers, we are not available to make these decisions.
So we impart these responsibilities to a piece of software. This allows us to write our application software at a very high level of abstraction. To do this, the run-time environment must first locate space within the RPU that is large enough to execute the given hardware object. It must then perform the necessary routing between the hardware object's inputs and outputs and the blocks of memory reserved for each data stream. Next, it must stop the appropriate clock, reprogram the internal logic, and restart the RPU. Once the object starts to execute, the run-time environment must continuously monitor the hardware object's status flags to determine when it is done executing. Once it is done, the caller can be notified and given the results. The run-time environment is then free to reclaim the reconfigurable logic gates that were taken up by that hardware object and to wait for additional requests to arrive from the application software. The principal benefits of reconfigurable computing are the ability to execute larger hardware designs with fewer gates and to realize the flexibility of a software-based solution while retaining the execution speed of a more traditional, hardware-based approach. This makes doing more with less a reality. In our own business we have seen tremendous cost savings, simply because our systems do not become obsolete as quickly as our competitors': reconfigurable computing enables the addition of new features in the field, allows rapid implementation of new standards and protocols on an as-needed basis, and protects our customers' investment in computing hardware. Whether you do it for your customers or for yourselves, you should at least consider using reconfigurable computing in your next design. You may find, as we have, that the benefits far exceed the initial learning curve. And as reconfigurable computing becomes more popular, these benefits will only increase.
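The run-time-environment steps just described (locate space, route and reprogram, run, monitor completion, reclaim gates) can be sketched as a small scheduler. This is a hedged simulation: the RPU is modeled as a simple pool of gates, hardware objects as dictionaries, and "routing" and "reprogramming" are collapsed into a function call; all names are hypothetical.

```python
class RunTimeEnvironment:
    def __init__(self, total_gates):
        self.free_gates = total_gates

    def execute(self, hardware_object):
        gates = hardware_object["gates"]
        # 1. Locate space in the RPU large enough for the object.
        if gates > self.free_gates:
            raise MemoryError("not enough reconfigurable logic free")
        self.free_gates -= gates
        try:
            # 2-3. Route the object's I/O and reprogram the internal logic
            # (simulated), then restart the RPU and let it run to completion.
            result = hardware_object["logic"](hardware_object["input"])
        finally:
            # 4. Object done: reclaim its gates for later requests.
            self.free_gates += gates
        # 5. Notify the caller with the results.
        return result


rte = RunTimeEnvironment(total_gates=10_000)
fir = {"gates": 4_000,
       "logic": lambda xs: sum(xs) / len(xs),  # pretend averaging filter
       "input": [2, 4, 6]}
print(rte.execute(fir))   # 4.0
print(rte.free_gates)     # 10000  (gates reclaimed after completion)
```

A real run-time environment would also handle priorities, deadlines, and concurrent objects; the sketch only shows the allocate/run/reclaim cycle.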
ADVANTAGES OF RECONFIGURABILITY The term reconfigurable computing has come to refer to a loose class of embedded systems. Many system-on-a-chip (SoC) computer designs provide reconfigurability options that combine the high performance of hardware with the flexibility of software. To most designers, SoC means encapsulating one or more processing elements, that is, general-purpose embedded processors and/or digital signal processor (DSP) cores, along with memory, input/output devices, and other hardware, into a single chip. These versatile chips can perform many different functions. However, while SoCs offer choices, the user can choose only among functions that already reside inside the device. Developers also create ASICs, chips that handle a limited set of tasks but do them very quickly. The limitation of most types of complex hardware devices (SoCs, ASICs, and general-purpose CPUs) is that the logical hardware functions cannot be modified once the silicon design is complete and fabricated. Consequently, developers are typically forced to amortize the cost of SoCs and ASICs over a product lifetime that may be extremely short in today's volatile technology environment. Solutions involving combinations of CPUs and FPGAs allow hardware functionality to be reprogrammed, even in deployed systems, and enable medical instrument OEMs to develop new platforms for applications that require rapid adaptation to input. The technologies combined provide the best of both worlds for system-level design. Careful analysis of computational requirements reveals that many algorithms are well suited to high-speed sequential processing, many can benefit from parallel processing capabilities, and many can be broken down into components that are split between the two. With this in mind, it makes sense to always use the best technology for the job at hand.
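The sequential-versus-parallel split described above can be sketched as a tiny dispatcher: control-heavy, sequential work stays on the CPU, while regular data-parallel kernels are off-loaded to the FPGA. The task names and the set of "parallel" kinds below are assumptions made up for the example.

```python
def dispatch(task):
    """Place a task on the CPU or the FPGA based on its workload kind."""
    parallel_kinds = {"filter", "fft", "image_kernel"}  # regular, data-parallel
    return "FPGA" if task["kind"] in parallel_kinds else "CPU"


tasks = [
    {"name": "ui_loop",    "kind": "control"},
    {"name": "fir_filter", "kind": "filter"},
    {"name": "report_gen", "kind": "control"},
    {"name": "fft_block",  "kind": "fft"},
]
placement = {t["name"]: dispatch(t) for t in tasks}
print(placement)
# {'ui_loop': 'CPU', 'fir_filter': 'FPGA', 'report_gen': 'CPU', 'fft_block': 'FPGA'}
```

In a real system this decision is made at design time by profiling; the point of the sketch is only that each component goes to the technology best suited to it.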
Processors are best suited to general-purpose processing and high-speed sequential processing (as are DSPs), while FPGAs excel at high-speed parallel processing. The general-purpose capability of the CPU enables it to perform system management very well, and allows it to be used to control the content of the FPGAs contained in the system. This symbiotic relationship between CPUs and FPGAs also means that the FPGA can off-load computationally intensive algorithms from the CPU, allowing the processor to spend more time on general-purpose tasks such as data analysis and on communicating with a printer or other equipment. Conclusion These new chips, called chameleon chips, are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed; such a chip can also be called a "chip on demand." Reconfigurable computing goes a step beyond programmable chips in the matter of flexibility. It is not only possible but relatively commonplace to "rewrite" the silicon so that it can perform new functions in a split second. Reconfigurable chips are simply the extreme end of programmability. Highly flexible processors that can be reconfigured remotely in the field, Chameleon's chips are designed to simplify communication system design while delivering increased price/performance numbers. The chameleon chip is a high-bandwidth reconfigurable communications processor (RCP). It aims at allowing a system's design to be changed from a remote location, which will mean more versatile handhelds. Its applications are in data-intensive Internet processing, DSP, wireless base stations, voice compression, software-defined radio, high-performance embedded telecom and datacom applications, xDSL concentrators, fixed wireless local loop, multichannel voice compression, and multiprotocol packet and cell processing.
Its advantages are that it can create customized communications signal processors, it has increased performance and channel count, it can adapt more quickly to new requirements and standards, and it has lower development costs and reduced risk. A FUTURISTIC DREAM One day, someone will make a chip that does everything for the ultimate consumer device. The chip will be smart enough to be the brains of a cell phone that can transmit or receive calls anywhere in the world. If the reception is poor, the phone will automatically adjust so that the quality improves. At the same time, the device will also serve as a handheld organizer and a player for music, videos, or games. Unfortunately, that chip doesn't exist today. It would require:
• flexibility
• high performance
• low power
• and low cost
But we might be getting closer. Now a new kind of chip may reshape the semiconductor landscape. The chip adapts to any programming task by effectively erasing its hardware design and regenerating new hardware that is perfectly suited to run the software at hand. These chips, referred to as reconfigurable processors, could tilt the balance of power that has preserved a decade-long standoff between programmable chips and hard-wired custom chips. These new chips are able to rewire themselves on the fly to create the exact hardware needed to run a piece of software at the utmost speed. An example of such a chip is the chameleon chip, which can also be called a "chip on demand." "Reconfigurable computing goes a step beyond programmable chips in the matter of flexibility. It is not only possible but relatively commonplace to 'rewrite' the silicon so that it can perform new functions in a split second. Reconfigurable chips are simply the extreme end of programmability." If these adaptable chips can reach cost-performance parity with hard-wired chips, customers will chuck the static hard-wired solutions.
And if silicon can indeed become dynamic, then so will the gadgets of the information age. No longer will you have to buy a camera and a tape recorder. You could just buy one gadget, and then download a new function for it when you want to take some pictures or make a recording. Just think of the possibilities for the fickle consumer. Programmable logic chips, which are arrays of memory cells that can be programmed to perform hardware functions using software tools, are more flexible than DSP chips but slower and more expensive. For consumers, this means that the day isn't far away when a cell phone can be used to talk, transmit video images, connect to the Internet, maintain a calendar, and serve as entertainment during travel delays, without the need to plug in adapter hardware. REFERENCES BOOKS
• Wei Qin Presentation, Oct 2000 (the part of the presentation regarding CS2000 is covered in this page)
• IEEE conference on Tele-communication, 2001.
WEBSITES
• www.chameleonsystems.com
• www.thinkdigit.com
• www.ieee.org
• www.entecollege.com
• www.iec.org
• www.quicksilvertechnologies.com
• www.xilinx.com
ABSTRACT Chameleon chips are chips whose circuitry can be tailored specifically for the problem at hand. Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs). An FPGA is covered with a grid of wires. At each crossover, there's a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime, and even software that can map out circuitry that's optimized for specific problems. The chips still won't change colors. But they may well color the way we use computers in years to come. It is a fusion between custom integrated circuits and programmable logic.
In the case where we are doing highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than a lot of things averagely, are used. Now, using field-programmable chips, we have chips that can be rewired in an instant. Thus the benefits of customization can be brought to the mass market. CONTENTS
• INTRODUCTION
• CHAMELEON CHIPS
• ADVANTAGES AND APPLICATION
• FPGA
• CS2112
• RECONFIGURING THE ARCHITECTURE
• RECONFIGURABLE PROCESSORS
• RECONFIGURABLE COMPUTING
• RECONFIGURABLE HARDWARE
• ADVANTAGES OF RECONFIGURABILITY
• CONCLUSION

Wednesday, October 23, 2019

Development from conception to age 16 years Essay

E1. 0-3 – Social and Emotional. Babies around the age of 0-3 will learn how to make eye contact, smile and laugh at others; this gets adults' attention and starts to form good bonds between the baby and mother. Within social development, children learn to make friends and understand the importance of social skills, which will help them succeed in their personal and professional lives. Babies start to socialise and form attachments with the people they see the most, such as their parents and other family members. Children start to understand all different kinds of social skills. For example, babies and young toddlers will learn to share and take turns during activities and normal everyday routines. Babies need a lot of stimulation in order for their brain to develop, and opportunities to physically use their body. As babies gradually get older and reach the age of 2, you will notice that they start to change and feel a lot more emotions, which can show in temper tantrums. 0-3 – Language and Communication. Babies around the age of 0-3 will learn how to communicate and understand how communication works. They will start to recognise people's voices, such as those of their parents and other family members. Being able to recognise these voices helps babies realise who these people are and who they should turn to. As babies grow up, they can understand different words and sounds from their parents and begin to say things themselves, such as 'mama' or 'dada'. You will find that babies often talk to themselves, but as a parent it may be difficult to understand what they are saying or trying to say. E2. 3-7 – Social and Emotional. Children at the age of 3-7 will have much more of an understanding of their social and emotional development than when they were a baby. Children react differently and will have gained a lot more understanding of what social development is all about.
For example, children at this age will know a lot more about sharing and taking turns during activities. They will realise that sharing and taking turns is important, as they will begin school, where there will be many more children with whom they will be involved. Most children at this age enjoy playing and working with others, but a few may prefer to work and play on their own. Socialising is how children learn to relate to other people and follow what is normal in their society, e.g. manners and toilet training. Children in this age range can feel many mixtures of emotions. This aspect helps children learn to express their feelings and to control and manage them. 3-7 – Language and Communication. During the age of 3-4, children are able to use language well and fairly grammatically, although there will be some speech immaturity. Children at this age are able to form good sentences, start to ask questions such as 'why?', and are able to understand the answers adults feed back to them. At the age of 5-7, children are more likely to understand how to do things on their own. For example, they can say their own name and how old they are, and are able to give different information about themselves. At this age children will also have a good interest in reading and writing. This is important, as it benefits their language and communication a great deal: they are able to recognise and understand bigger words which they won't have heard before. (Meggitt, C. (2006), page …) E3. Explain two theoretical perspectives relevant to the areas of development. Lev Vygotsky. – Vygotsky believed that children come to understand language and communication through good interaction between themselves and other people. Vygotsky thought that by the age of 2-3 children should use language to control their behaviour and thoughts, explaining their feelings by talking out loud.
Vygotsky also believed that children develop different forms of communication, expression and explanation by playing and interacting with other children, either at home or in school. Therefore he said that play in schools was significant for learning and that children should help each other through play; this will help children understand the importance of socialising. Children use facial expressions and body language in order to understand what has been said to them. Vygotsky suggested that thought and language began as two different activities: when a baby babbles, the baby is not using babbling as a way of thinking; rather, the baby is learning to talk. Jerome Bruner. – Bruner believed that children learn by making their own choices and having the chance to take different opportunities in order to learn. Independence comes into this theory, as independence has a massive impact on children: they should learn to do things for themselves instead of asking an adult. Bruner believed that children learn through different activities such as reading, writing and drawing. He felt that adults should guide and support children during activities like these so he or she could reach their potential. Adults guiding and supporting children is called "scaffolding", which helps children to develop their knowledge and understanding. E4. Include three observations as appendices. E5. Written Narrative Observation – A narrative observation records in detail what the child is doing and what you see. Time Sampling Observation – Observing what happens in a short period of time. Tick List Observation – A list of things an observer looks at when observing children. E6/C1. When you work in childcare settings you are always working with young children, their families and other professionals. You should know that confidentiality has a massive impact when working in childcare settings.
Confidential information concerning children or their families should never be discussed with anyone or written down anywhere unnecessarily, as confidentiality is the right of every child and parent, whether the information is spoken, written down or on a computer. When working on observations it is also important that you maintain confidentiality. When observing children it is important that you write down all correct information about the child and not write anything that is unnecessary. After observing children you should make sure that all observations are stored away properly, which means in a safe and secure place. This is so nobody is able to see what has been written down about the particular child except the person who is responsible for the child; identifying details, for example the child's name, should be protected. It is also important that the name of the setting stays confidential, as it could be passed on to people whom it may not concern. D1/D2. The observations I carried out showed that child A was confident, as she showed she could play alongside her friends by sharing and taking turns while playing with the babies. Child A was acting out different roles such as mum, dad, brothers and sisters, and dressing up. Child A showed that she was being independent by getting out different equipment herself when she needed it. For example, she decided she wanted to feed her baby, therefore she got out the feeding equipment herself and fed her baby independently. Child A showed that she was particularly interested in playing in the home corner, as she stuck to this for a long period of time and didn't change to a different activity. She showed love and affection to the baby, treating it as a real human and looking after her. As child A was playing in the home corner she made sure she was including each of her friends by letting them join in with her and playing nicely.
Playing in the home corner supports children's needs by helping them with their gross and fine motor skills, as children will try out new things involving gross and fine motor skills. B1. When you are working on observations it is important that you plan everything before you start the observation, in order for you to look back on the planning, know what you are able to do, and follow everything when it comes to doing the observations. Talk about working alongside other parents and professionals… Make sure you are doing the correct observations… Knowing whether the observation has gone well or badly… Evaluate and reflect on them… A. There are 4 key components of attachment, which are Safe Haven, Secure Base, Proximity Maintenance and Separation Distress. John Bowlby used the word attachment so children could experience bonding with more than one person. He was one of the first people to recognise the need of babies and young children for a strong relationship with their carers. Attachment is about parents being available to meet their child's needs and being aware of security within their children. He said that bonds which are formed at a young age have a huge impact on children throughout their lives. Babies and young children who do not have bonds, or who find it difficult to create bonds with other people, may find it hard to form relationships in later life; he suggested that it was important for babies and young children to have some form of attachment or bond with their mother. Mary Ainsworth also looked at attachment, working alongside John Bowlby. She is a theorist who likewise studied attachment in young children. Mary Ainsworth looked at how babies reacted when they were left with a complete stranger and then reunited with their parents. This links in with attachment behaviour.