Friday, November 29, 2019

Computers Invention Of The Century Essays - Vacuum Tube Computers

Computers: Invention of the Century The History of Computers Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device changes the way we manage, work, and live. A machine that has done all this and more now exists in nearly every business in the United States. This incredible invention is the computer. The electronic computer has been around for over half a century, but its ancestors have been around for 2,000 years. However, only in the last 40 years has the computer changed American management to its greatest extent. From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of management, and our lives, for the better. The very earliest ancestor of the modern-day computer is the abacus, which dates back almost 2,000 years (Dolotta, 1985). It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to programming rules that the user must memorize, all ordinary arithmetic operations can be performed. This was one of the first management tools used. The next innovation in computers took place in 1694 when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector, manage the town's taxes (Beer, 1966). In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculation machine (Dolotta, 1985). It was steam powered and could store up to 1,000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need. It was programmed by, and stored data on, cards with holes punched in them, appropriately called punch cards. This machine was extremely useful to managers who dealt with large volumes of goods. With Babbage's machine, managers could more easily calculate the large numbers accumulated by inventories. The only problem was that only one of these machines was ever built, which made it difficult for all managers to use (Beer, 1966). After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle that interest. Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation. The first major use for a computer in the U.S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human intervention (Dolotta, 1985). Since the population of the U.S. was increasing so fast, the computer was an essential tool for managers in tabulating the totals (Hazewindus, 1988). These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines, Remington-Rand, Burroughs, and other corporations (Chposky, 1988). By modern standards the punched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world's business computing (Jacobs, 1975). 
By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts (Chposky, 1988). Aiken's machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations (Dolotta, 1985). It also had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention. The outbreak of World War II produced a desperate need for computing capability, especially for the military (Dolotta, 1985). New weapons systems

Monday, November 25, 2019

Free Essays on Swords

The sword is a weapon consisting of a long, sharp-edged or pointed blade fixed in a hilt, a handle that usually has a protective guard at the place where the handle joins the blade. In a general sense, the term connotes any side arm for cutting or thrusting, such as a rapier, saber, épée, scimitar, cutlass, or claymore. Swords used in the most ancient times were made of stone, bone, or wood. Bronze swords, which were probably known to the Egyptians as early as 2000 BC, were the first metal swords. Harder iron swords, appearing at later times in different parts of the world, quickly proved superior; these remained in use until fairly recent times, when steelmaking was perfected and steel blades appeared. The requirements and methods of modern warfare have made swords obsolete as combat weapons. The sword has always been a personal weapon, effective only in hand-to-hand combat, and as such it was associated with individual distinction. The swords of political and military leaders, nobles, and exceptional warriors frequently were ornamented, with hilts elegantly decorated and sometimes bejeweled, and blades inlaid with gold and silver or forged so as to produce a watered effect after the damascene fashion. Symbolic importance also was often attached to the sword. In mythology and literature, swords possessing supernatural qualities abound; these belonged to or were acquired by heroes and superior warriors. Oaths of honor or fealty commonly were taken on the sword, and sovereigns still confer knighthood by tapping the shoulder with a sword. To surrender one's sword has always been a token of defeat or submission, and the breaking of it a ceremony of degradation. In the U.S. Army the sword has been abolished, and a single form of saber is worn by officers on ceremonial occasions. When not drawn for use, swords generally are worn in a scabbard, a leather or metal sheath, belted to or hung at the side.

Thursday, November 21, 2019

Marbury v. Madison Essay Example | Topics and Well Written Essays - 250 words

Marbury v. Madison - Essay Example The President filled the slots and the Senate approved them (Smith, 1996, p.524). Some appointments, however, were deemed void. The legislation was later amended, and Jefferson later eliminated some commissions, including Marbury's, and reassigned some slots to Democratic-Republican members. Marbury filed a petition in the Supreme Court for a writ of mandamus. This raised the issues of whether Marbury had a right to the commission, whether the law afforded him a remedy, whether the Supreme Court had original jurisdiction to issue writs of mandamus, whether the Supreme Court had the mandate to review acts of Congress and thereby determine whether they were unconstitutional, and whether Congress could increase the Supreme Court's mandate as provided for under Article III of the Constitution. The court held that Marbury bore a right to the commission and had a remedy. It further held that the Supreme Court had the mandate to review acts of Congress and determine whether they were unconstitutional, that Congress had no mandate to expand the Supreme Court's original jurisdiction beyond what is provided for under the Constitution's Article III, and that the Supreme Court lacked original jurisdiction to issue writs of mandamus. This decision instituted the model of judicial review, the judiciary's power to declare a law unconstitutional. The case strengthened the principle of checks and balances within the government. It was, therefore, a win for the Democratic-Republicans, as Marbury failed to attain the position of Justice of the

Wednesday, November 20, 2019

Information Security Essay Example | Topics and Well Written Essays - 1000 words

Information Security - Essay Example Thus, safeguarding information is as significant as caring for currency and other physical resources, and it necessitates just as much protection and planning (Motorolla, 2010). Information has turned out to be the most valuable resource for any business, and it is extremely important that an organization take great care of it. To serve this purpose, a corporation needs to establish and maintain a sound security policy that offers better awareness of, and insight into, its processes and departments. This paper presents some of the prime aspects of security and security awareness. The aim of this paper is to analyze the ideas which have been presented by Bruce Schneier. According to Schneier (2008), security is a feeling, based not on probabilities and mathematical calculations but on our psychological reactions to both risks and countermeasures. In this scenario, we might feel terribly frightened of terrorism, or we might believe it is not something worth worrying about. Thus, the feeling and the reality of security are certainly connected to one another, but they are certainly not the same as each other; we would be better off if we had two different terms for them. In this way, Schneier (2008) has tried to explore the feeling of security. The techniques that will most successfully reduce the ability of hackers and intruders to damage and compromise information security require wide-ranging user education and training. Endorsing policies and procedures alone is not sufficient, and even with oversight, policies and procedures cannot be effective on their own (iWAR, 2010). In this regard, a business security management team cannot by itself provide the kind of general corporate awareness essential to keep away the large variety of incidents a business might experience. That kind of awareness necessitates the active contribution of every staff member in the corporation. Additionally, incidents caused by workers' faults produce more harm to the company each year than outside attacks. In this scenario, getting the support and contribution of an organization's workers necessitates an energetic awareness program, one that is upheld through all layers of management (Olzak, 2006). Producing an information security and privacy awareness and training program is not an easy job; it is frequently a frustrating and difficult one. However, offering employees the security and privacy information they require, and making sure they recognize and follow the requirements, is a significant part of an organization's business success. If the employees of an organization are not familiar with how to uphold the privacy of data and information, or how to protect it properly, they not only risk having one of their most precious business resources (information) mismanaged, acquired by unauthorized persons, or unsuitably utilized, but also risk being in violation of a large number of rules and policies that necessitate certain kinds of data and information security and privacy awareness and training procedures. Moreover, they also risk spoiling another precious asset: business reputation. 
Thus, information privacy and security training is significant for many reasons (Herold, 2010). Schneier (2008) outlined four main features of the information secu

Monday, November 18, 2019

Problem set Assignment Example | Topics and Well Written Essays - 1250 words

Problem set - Assignment Example By the cooperative principle, this answer was not informative and was clearly at odds with what the person had asked. It is not necessarily true that cats rule the world, and the subject had even asked how long it would take the process to be complete. It is supposed to be a unified exchange in which answers are given effectively. Word order in linguistics refers to the study of the syntactic constituents that make up a language. Under many circumstances, correlations between different word orders do occur. Basic word orders can be defined in terms of the subject (S), the finite verb (V), and the object (O). The normal transitive sentence has six theoretically possible word orders. SVO, however, is basic to many languages of the world, which makes it a central issue of concern in this discussion. The focus here is on the Chinese and English languages. The aim of this section is to make a comparative and contrasting view of word order. Word order in the Chinese language is as important as it is in the English language. On a comparative basis, there is a sentence constituent structure that follows the SVO order. This does not, however, imply that the English and Chinese word orders are entirely the same. To start with, in statements, the structures of these sentences are the same. The subject precedes the verb and the object comes later. This can be referred to as the SVO order, which is just the normal word order in these language systems. Take for example the sentence "I learn Mandarin". In Chinese, it takes the same order of arrangement, 我学中文, where 我 = I, 学 = learn, 中文 = Mandarin. This is exactly the same order of arrangement of words. The arrangement above simply indicates that there is no problem interpreting the language as far as the order is concerned. There is, however, a slight difference in verb inflection between English and Chinese. In Chinese, verbs are not inflected. The Chinese language has no past tense,

Saturday, November 16, 2019

Performance Measure of PCA and DCT for Images

Performance Measure of PCA and DCT for Images Generally, in image processing, transformation is the basic technique that we apply in order to study the characteristics of the image under scan. Under this process, here we present a method in which we analyze the performance of two methods, namely PCA and DCT. In this thesis we analyze the system by first training the set on a particular number of images and then evaluating the performance of the two methods by calculating the error in each of them. This thesis studied and tested the PCA and DCT transformation techniques. PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). The DCT expresses a series of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. These transformations are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations. CHAPTER 1 INTRODUCTION 1.1 Introduction Over the past few years, several face recognition systems have been proposed based on principal components analysis (PCA) [14, 8, 13, 15, 1, 10, 16, 6]. Although the details vary, these systems can all be described in terms of the same preprocessing and run-time steps. During preprocessing, they register a gallery of m training images to each other and unroll each image into a vector of n pixel values. Next, the mean image for the gallery is subtracted from each image and the resulting centered images are placed in a gallery matrix M. Element [i, j] of M is the ith pixel from the jth image. A covariance matrix W = MM^T characterizes the distribution of the m images in R^n. A subset of the eigenvectors of W are used as the basis vectors for a subspace in which to compare gallery and novel probe images. When sorted by decreasing eigenvalue, the full set of unit-length eigenvectors represents an orthonormal basis where the first direction corresponds to the direction of maximum variance in the images, the second to the next largest variance, and so on. These basis vectors are the principal components of the gallery images. Once the eigenspace is computed, the centered gallery images are projected into this subspace. At run-time, recognition is accomplished by projecting a centered probe image into the subspace, and the nearest gallery image to the probe image is selected as its match. There are many differences in the systems referenced. Some systems assume that the images are registered prior to face recognition [15, 10, 11, 16]; among the rest, a variety of techniques are used to identify facial features and register them to each other. Different systems may use different distance measures when matching probe images to the nearest gallery image. 
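A minimal MATLAB/Octave sketch of the preprocessing just described: centre the unrolled gallery images, build the eigenspace, and project the gallery and a probe into it. The image and gallery sizes are toy values, and the small m-by-m eigenproblem (solving A'*A instead of the n-by-n covariance W = A*A') is the usual eigenface shortcut rather than something spelled out in the text above.

n = 64*64;  m = 40;                   % pixels per image, gallery size (illustrative)
M   = rand(n, m);                     % stand-in for the unrolled gallery images
psi = mean(M, 2);                     % mean image of the gallery
A   = M - psi;                        % centred gallery matrix
[V, D]  = eig(A' * A);                % eigenvectors of the small m-by-m matrix
[~, ix] = sort(diag(D), 'descend');   % sort by decreasing eigenvalue
ix = ix(1:m-1);                       % centred data has at most m-1 non-trivial components
U  = A * V(:, ix);                    % corresponding eigenvectors of W = A*A'
U  = U ./ sqrt(sum(U.^2, 1));         % unit-length basis vectors (the "eigenfaces")
gallery_weights = U' * A;             % centred gallery images projected into the subspace
probe           = rand(n, 1);         % a novel probe image (stand-in)
probe_weights   = U' * (probe - psi); % projection of the centred probe, used for matching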
Different systems select different numbers of eigenvectors (usually those corresponding to the largest k eigenvalues) in order to compress the data and to improve accuracy by eliminating eigenvectors corresponding to noise rather than meaningful variation. To help evaluate and compare individual steps of the face recognition process, Moon and Phillips created the FERET face database and performed initial comparisons of some common distance measures for otherwise identical systems [10, 11, 9]. This paper extends their work, presenting further comparisons of distance measures over the FERET database and examining alternative ways of selecting subsets of eigenvectors. Principal Component Analysis (PCA) is one of the most successful techniques that have been used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The jobs which PCA can do are prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a classical technique which operates in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, communications, etc. Face recognition has many applicable areas. Moreover, it can be categorized into face identification, face classification, or sex determination. The most useful applications include crowd surveillance, video content indexing, personal identification (e.g., driver's license), mug shot matching, entrance security, etc. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This can be called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors). The details are described in the following section. PCA computes the basis of a space which is represented by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors. As has been said earlier, we call them eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features for that face. The face is expressed in the face space by its eigenface coefficients (or weights). We can handle a large input vector, a facial image, by taking only its small weight vector in the face space. This means that we can reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space. We now describe a face recognition system using the Principal Component Analysis (PCA) algorithm. Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set. In this project, our training set consists of the features extracted from known face images of different persons. 
Thus, the task of the face recognizer is to find the most similar feature vector among the training set to the feature vector of a given test image. Here, we want to recognize the identity of a person when an image of that person (the test image) is given to the system. PCA is used as the feature extraction algorithm in this project. In the training phase, feature vectors are extracted for each image in the training set. Let I_A be a training image of person A which has a pixel resolution of M × N (M rows, N columns). In order to extract PCA features of I_A, the image is first converted into a pixel vector x_A by concatenating each of the M rows into a single vector. The length (or dimensionality) of the vector x_A will be M × N. In this project, the PCA algorithm is used as a dimensionality reduction technique which transforms the vector x_A into a vector w_A of dimensionality d, where d << M × N. For each training image I_i, these feature vectors w_i are calculated and stored. In the recognition phase (or testing phase), a test image I_j of a known person is given. Let n_j be the identity (name) of this person. As in the training phase, the feature vector of this person is computed using PCA to obtain w_j. In order to identify I_j, the similarities between w_j and all of the feature vectors w_i in the training set are computed. The similarity between feature vectors can be computed using the Euclidean distance. The identity of the most similar w_i will be the output of the face recognizer. If i = j, the person j has been correctly identified; otherwise, if i ≠ j, the person j has been misclassified. 1.2 Thesis structure: This thesis work is divided into five chapters as follows. Chapter 1: Introduction. This introductory chapter briefly explains the procedure of transformation in face recognition and its applications, explains the scope of this research, and finally gives the structure of the thesis for friendly usage. Chapter 2: Basics of Transformation Techniques. This chapter gives an introduction to the transformation techniques; it introduces the two transformation techniques for which we perform the analysis, whose results are used for face recognition purposes. Chapter 3: Discrete Cosine Transform. This chapter continues the discussion of transformations from chapter 2; the second method, i.e., the DCT, is introduced and analyzed. Chapter 4: Implementation and results. This chapter presents the simulated results of the face recognition analysis using MATLAB, explains each step of the design of the face recognition analysis, and gives the tested results of the transformation algorithms. Chapter 5: Conclusion and future work. This is the final chapter of the thesis. Here we conclude our research, discuss the achieved results of this research work, and suggest future work. CHAPTER 2 BASICS of Image Transform Techniques 2.1 Introduction: Nowadays image processing has gained so much importance that in every field of science we apply image processing, both for security purposes and because of the increasing demand for it. Here we apply two different transformation techniques in order to study their performance, which will be helpful for detection purposes. 
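Before turning to the individual transforms, the recognition step described in section 1.1 above (the nearest training feature vector by Euclidean distance gives the identity) can be sketched in a few lines of MATLAB/Octave. The feature vectors below are random stand-ins; in the thesis they would come from PCA or the DCT.

d = 20;  m = 12;                                         % feature length, number of training images (toy sizes)
train_feats = randn(d, m);                               % one feature column per training image
labels      = cellstr(num2str((1:m)', 'person%02d'));    % identity of each training image
w_test      = train_feats(:, 7) + 0.01*randn(d, 1);      % probe feature vector, close to person07
dists        = sqrt(sum((train_feats - w_test).^2, 1));  % Euclidean distance to every training vector
[~, nearest] = min(dists);
identity     = labels{nearest};                          % output of the recognizer ('person07' here)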
The computation of the performance for the image given for testing is performed in two steps: PCA (Principal Component Analysis) and DCT (Discrete Cosine Transform). 2.2 Principal Component Analysis: PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). PCA is now mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix or the singular value decomposition of a data matrix, usually after mean-centering the data for each attribute. The results of this analysis technique are usually shown in terms of component scores and loadings. PCA is a true eigenvector-based multivariate analysis. Its action can be described as revealing the internal structure of the data in a way that explains the mean and variance in the data. If a multivariate data set is visualized as a set of coordinates in a multi-dimensional data space, this algorithm supplies the user with a lower-dimensional picture, a shadow of the object as viewed from its most informative viewpoint, which reveals the true informative nature of the object. PCA is very closely related to factor analysis, and some statistical software packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the original configuration and then solves for the eigenvectors of a slightly different matrix. 2.2.1 PCA Implementation: PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance of any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimum transform for given data in least-squares terms. For a data matrix X^T with zero empirical mean (i.e., the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by Y^T = X^T W = V Σ^T, where the matrix Σ is an m-by-n diagonal matrix whose diagonal elements are non-negative and W Σ V^T is the singular value decomposition of X. Given a set of points in Euclidean space, the first principal component corresponds to the line that passes through the mean and minimizes the sum of squared errors with those points. The second principal component corresponds to the same notion after all correlation with the first principal component has been subtracted from the points. Each eigenvalue indicates the portion of the variance that is associated with its eigenvector. Thus, the sum of all the eigenvalues is equal to the sum of the squared distances of the points from their mean, divided by the number of dimensions. PCA rotates the set of points around its mean in order to align it with the first few principal components. This moves as much of the variance as possible into the first few dimensions. 
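A short MATLAB/Octave sketch of the SVD route just described; the names, sizes and data are illustrative, and the row/column convention follows the text (each row of X^T is one repetition of the experiment, each column one variable).

Xt = randn(200, 5) * randn(5, 5);         % X^T: 200 observations of 5 correlated variables
Xt = Xt - mean(Xt, 1);                    % zero empirical mean
[W, S, V] = svd(Xt', 'econ');             % X = W*S*V', the singular value decomposition of X
Yt = Xt * W;                              % Y^T = X^T * W, the principal component scores
latent = diag(S).^2 / (size(Xt, 1) - 1);  % variance carried by each component
% The columns of W match the eigenvectors of cov(Xt) up to sign, and latent
% matches its eigenvalues, which ties this back to the covariance method below.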
The values in the remaining dimensions therefore tend to be small and may be dropped with minimal loss of information. PCA is used for dimensionality reduction. PCA is the optimal linear transformation for keeping the subspace that has the largest variance. This advantage comes at the price of greater computational requirements compared, for example, with the discrete cosine transform; non-linear dimensionality reduction techniques tend to be even more computationally demanding than PCA. Mean subtraction is necessary in performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component will instead correspond to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the first principal component w_1 of a data set x can be defined as: w_1 = arg max_{||w||=1} E[(w^T x)^2]. With the first k − 1 components, the kth component can be found by subtracting the first k − 1 principal components from x: x̂_{k−1} = x − Σ_{i=1}^{k−1} w_i w_i^T x, and by substituting this as the new data set to find the next principal component: w_k = arg max_{||w||=1} E[(w^T x̂_{k−1})^2]. The PCA transform is therefore equivalent to finding the singular value decomposition of the data matrix X and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors W_L: Y = W_L^T X. The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X X^T. The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient). PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology. An auto-encoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, this technique will not necessarily produce orthogonal vectors. PCA is a popular primary technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take this into account. 2.2.2 PCA Properties and Limitations PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high-dimensional vectors into a set of lower-dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the probability distribution of the data. However, the latter two properties are regarded as weaknesses as well as strengths: being non-parametric, no prior knowledge can be incorporated, and PCA compression often incurs a loss of information. The applicability of PCA is limited by the assumptions [5] made in its derivation. These assumptions are: The observed data set is assumed to be a linear combination of a certain basis. Non-linear methods such as kernel PCA have been developed without assuming linearity. PCA uses the eigenvectors of the covariance matrix, and it only finds the independent axes of the data under the Gaussian assumption. For non-Gaussian or multi-modal Gaussian data, PCA simply de-correlates the axes. 
When PCA is used for clustering, its main limitation is that it does not account for class separability, since it makes no use of the class label of the feature vector. There is no guarantee that the directions of maximum variance will contain good features for discrimination. PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data has a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics and the lower ones correspond to noise. 2.2.3 Computing PCA with the covariance method Following is a detailed description of PCA using the covariance method. The goal is to transform a given data set X of dimension M to an alternative data set Y of smaller dimension L. Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen-Loève transform (KLT) of matrix X: Y = KLT{X}. Organize the data set. Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further that the data are arranged as a set of N data vectors, each representing a single grouped observation of the M variables. Write the data as column vectors, each of which has M rows, and place the column vectors into a single matrix X of dimensions M × N. Calculate the empirical mean. Find the empirical mean along each dimension m = 1, ..., M, and place the calculated mean values into an empirical mean vector u of dimensions M × 1. Calculate the deviations from the mean. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence we proceed by centering the data as follows: subtract the empirical mean vector u from each column of the data matrix X and store the mean-subtracted data in the M × N matrix B, that is, B = X − u h, where h is a 1 × N row vector of all 1s. Find the covariance matrix. Find the M × M empirical covariance matrix C from the outer product of matrix B with itself: C = E[B ⊗ B] = E[B · B*] = (1/N) B · B*, where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator. Note that outer products apply to vectors; for tensor cases we should apply tensor products, but the covariance matrix in PCA is a sum of outer products between its sample vectors, and indeed it can be represented as B · B*. Find the eigenvectors and eigenvalues of the covariance matrix. Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C: V^(−1) C V = D, where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues. These algorithms are readily available as sub-components of most matrix algebra systems, such as MATLAB [7][8], Mathematica [9], SciPy, IDL (Interactive Data Language), or GNU Octave, as well as OpenCV. Matrix D will take the form of an M × M diagonal matrix, where D[m, m] = λ_m is the mth eigenvalue of the covariance matrix C, and matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector. Rearrange the eigenvectors and eigenvalues. Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue, making sure to maintain the correct pairings between the columns in each matrix. 
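The steps of this section can be sketched end to end in a few lines of MATLAB/Octave on toy data; for completeness the sketch also includes the energy-based selection and projection steps that are described next. The sizes, the 90% threshold and the 1/(N−1) normalisation are illustrative choices, not values taken from the thesis.

M = 3;  N = 500;
X = randn(M, N);                       % N observations of M variables, stored as columns
u = mean(X, 2);                        % empirical mean vector, M x 1
B = X - u * ones(1, N);                % deviations from the mean (h = row of ones)
C = (B * B') / (N - 1);                % M x M empirical covariance matrix
[V, D]   = eig(C);                     % eigenvectors and eigenvalues of C
[ev, ix] = sort(diag(D), 'descend');   % rearrange by decreasing eigenvalue
V = V(:, ix);                          % keep eigenvector/eigenvalue pairing
g = cumsum(ev) / sum(ev);              % cumulative energy content
L = find(g >= 0.90, 1, 'first');       % smallest L reaching the chosen threshold
W = V(:, 1:L);                         % basis vectors: the first L eigenvectors
s = sqrt(diag(C));                     % empirical standard deviation vector
Z = B ./ (s * ones(1, N));             % z-scores, divided element by element
Y = W' * Z;                            % projection onto the new basis (the KLT of the data)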
Compute the cumulative energy content for each eigenvector. The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m: g[m] = Σ_{q=1}^{m} D[q, q], for m = 1, ..., M. Select a subset of the eigenvectors as basis vectors. Save the first L columns of V as the M × L matrix W, where 1 ≤ L ≤ M. Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent; in this case, choose the smallest value of L such that g[L] / g[M] ≥ 0.9. Convert the source data to z-scores. Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C, and calculate the M × N z-score matrix Z = B ./ (s · h) (divided element-by-element). Note: while this step is useful for various applications, as it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT. Project the z-scores of the data onto the new basis. The projected vectors are the columns of the matrix Y = W* · Z, where W* is the conjugate transpose of the eigenvector matrix. The columns of matrix Y represent the Karhunen-Loève transforms (KLT) of the data vectors in the columns of matrix X. 2.2.4 PCA Derivation Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find a d × d orthonormal transformation matrix P such that Y = P^T X, with the constraint that cov(Y) is a diagonal matrix and P^(−1) = P^T. By substitution and matrix algebra, we obtain cov(Y) = P^T cov(X) P, and hence P cov(Y) = cov(X) P. Rewriting P as d column vectors, P = [P_1, P_2, ..., P_d], and cov(Y) as the diagonal matrix of λ_1, ..., λ_d, and substituting into the equation above, we obtain λ_i P_i = cov(X) P_i. Notice that P_i is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints. CHAPTER 3 DISCRETE COSINE TRANSFORM 3.1 Introduction: A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in engineering, from lossy compression of audio and images to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply the DCT; its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. 
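To make the last statement concrete, the small MATLAB/Octave check below applies the type-II DCT and then the type-III DCT, both written out from their standard unscaled defining sums, and recovers the input; the 2/N factor involved is the one given under "Inverse transforms" below. The input sequence is only an example.

x = [3 1 4 1 5 9 2 6];  N = numel(x);
X = zeros(1, N);  xr = zeros(1, N);
for k = 0:N-1                                     % unscaled DCT-II of x
    n = 0:N-1;
    X(k+1) = sum(x .* cos(pi/N * (n + 0.5) * k));
end
for n = 0:N-1                                     % unscaled DCT-III of X
    k = 1:N-1;
    xr(n+1) = 0.5*X(1) + sum(X(k+1) .* cos(pi/N * k * (n + 0.5)));
end
xr = xr * 2/N;                                    % DCT-III times 2/N inverts DCT-II
% max(abs(xr - x)) is now zero to numerical precision.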
Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. 3.2 DCT forms: Formally, the discrete cosine transform is a linear, invertible function F : R^N -> R^N, or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers X_0, ..., X_{N−1} according to one of the following formulas. DCT-I: X_k = (1/2)(x_0 + (−1)^k x_{N−1}) + Σ_{n=1}^{N−2} x_n cos[π n k / (N−1)]. Some authors further multiply the x_0 and x_{N−1} terms by √2, and correspondingly multiply the X_0 and X_{N−1} terms by 1/√2. This makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of √(2/(N−1)), but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N − 2 real numbers with even symmetry. For example, a DCT-I of N = 5 real numbers abcde is exactly equivalent to a DFT of eight real numbers abcdedcb, divided by two. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N−1; similarly for X_k. DCT-II: X_k = Σ_{n=0}^{N−1} x_n cos[π (n + 1/2) k / N]. The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT. This transform is exactly equivalent to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, and y_{4N−n} = y_n for 0 < n < 2N. Some authors further multiply the X_0 term by 1/√2 and multiply the resulting matrix by an overall scale factor of √(2/N). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N−1/2; X_k is even around k = 0 and odd around k = N. DCT-III: X_k = (1/2) x_0 + Σ_{n=1}^{N−1} x_n cos[π n (k + 1/2) / N]. Because it is the inverse of the DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as the inverse DCT (IDCT). Some authors further multiply the x_0 term by √2 and multiply the resulting matrix by an overall scale factor of √(2/N), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N−1/2. DCT-IV: X_k = Σ_{n=0}^{N−1} x_n cos[π (n + 1/2)(k + 1/2) / N]. The DCT-IV matrix becomes orthogonal if one further multiplies by an overall scale factor of √(2/N). A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT) (Malvar, 1992). The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N−1/2; similarly for X_k. DCT V-VIII: DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N−1) (for DCT-I), 4N (for DCT-II/III), or 8N (for DCT-IV). In principle, there are actually four additional types of discrete cosine transform, corresponding essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments. 
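The abcde example given above can be checked numerically in a few lines of MATLAB/Octave: the DCT-I, written out from its defining sum, equals the DFT of the even-symmetric extension abcdedcb divided by two. The numeric values simply stand in for a, b, c, d, e.

x = [1 2 5 7 3];                    % "abcde" (illustrative values)
N = numel(x);
X1 = zeros(1, N);
for k = 0:N-1                       % unscaled DCT-I from its defining sum
    n = 1:N-2;
    X1(k+1) = 0.5*(x(1) + (-1)^k * x(N)) + sum(x(n+1) .* cos(pi/(N-1) * n * k));
end
y = [x, x(end-1:-1:2)];             % even-symmetric extension "abcdedcb", length 2N-2 = 8
Y = real(fft(y)) / 2;               % DFT of the eight real numbers, divided by two
% Y(1:N) matches X1 to numerical precision.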
Equivalently, DCTs of types I-IV imply boundaries that are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g., the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below. Inverse transforms: Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N−1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa. As for the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal. Multidimensional DCTs: Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2-D DCT-II is given by the formula (omitting normalization and other scale factors, as above): X_{k1,k2} = Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} x_{n1,n2} cos[π (n1 + 1/2) k1 / N1] cos[π (n2 + 1/2) k2 / N2]. Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order. The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs, e.g., the one-dimensional inverses applied along one dimension at a time in a row-column algorithm. The figure of two-dimensional DCT frequencies (not reproduced here) shows the combination of horizontal and vertical frequencies for an 8 × 8 (N1 = N2 = 8) two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by half a cycle. For example, moving one square right from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8 × 8) is transformed into a linear combination of these 64 frequency squares. CHAPTER 4 IMPLEMENTATION AND RESULTS 4.1 Introduction: In the previous chapters (chapter 2 and chapter 3), we gained the theoretical knowledge about the Principal Component Analysis and the Discrete Cosine Transform. In our thesis work we have seen the analysis of both transforms. To execute these tasks we chose the platform MATLAB, which stands for "matrix laboratory". It is an efficient language for digital image processing. The image processing toolbox in MATLAB is a collection of MATLAB functions that extend the capability of the MATLAB environment for the solution of digital image processing problems. 
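As a small MATLAB illustration of the row-column idea described in Chapter 3, the 2-D DCT-II of an 8 × 8 block can be written with an explicit (unscaled) DCT-II matrix T, so that the whole transform is T*A*T'. The magic-square block is just a stand-in for image data; dct2 from the Image Processing Toolbox computes the same transform up to the orthogonal scale factors it applies.

N = 8;                                  % block size (N1 = N2 = 8)
[k, n] = ndgrid(0:N-1, 0:N-1);
T = cos(pi/N * (n + 0.5) .* k);         % unscaled DCT-II matrix, T(k+1, n+1)
A = magic(N);                           % stand-in 8 x 8 image block
B = T * A * T';                         % 1-D DCT-II along the columns, then along the rows
% B holds the coefficients of the 64 frequency squares mentioned above.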
[13] 4.2 Practical implementation of performance analysis: As discussed earlier, we are going to perform the analysis for the two transform methods on the images as
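The thesis text breaks off at this point, so the following is only a guess at the kind of error comparison the abstract describes, not the author's actual MATLAB code: reconstruct the same set of vectors from L principal components and from the L largest DCT coefficients, then compare the mean squared errors. All sizes and data are toy values.

n = 64;  m = 80;  L = 10;                              % vector length, number of images, kept components
X  = randn(n, m) + linspace(0, 3, n)' * randn(1, m);   % correlated toy stand-ins for images
mu = mean(X, 2);  B = X - mu;                          % centred data
[V, D]  = eig(B * B');                                 % PCA basis from the scatter matrix
[~, ix] = sort(diag(D), 'descend');
W    = V(:, ix(1:L));
Xpca = mu + W * (W' * B);                              % rank-L PCA reconstruction
k = (0:n-1)';  t = 0:n-1;
T = sqrt(2/n) * cos(pi/n * (t + 0.5) .* k);            % orthonormal DCT-II matrix
T(1, :) = T(1, :) / sqrt(2);
C = T * X;                                             % DCT coefficients of each column
[~, ord] = sort(abs(C), 1, 'descend');                 % keep the L largest coefficients per column
Ct = zeros(size(C));
for j = 1:m, Ct(ord(1:L, j), j) = C(ord(1:L, j), j); end
Xdct = T' * Ct;                                        % inverse DCT (T is orthogonal)
err_pca = mean((X(:) - Xpca(:)).^2);                   % error of the PCA reconstruction
err_dct = mean((X(:) - Xdct(:)).^2);                   % error of the DCT reconstruction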

Wednesday, November 13, 2019

Milton and Cavendish: Faithful Realists :: Paradise lost Blazing World

Milton and Cavendish: Faithful Realists Inquiries regarding the nature and acquisition of knowledge, coupled with the monumental question of whether human beings are capable of accruing knowledge (the philosophical study of epistemology), have roots buried in antiquity: Genesis, to be exact. Great thinkers of the Western tradition have both accepted and rejected components of Old Testament lore; Platonic and Aristotelian philosophers have indeed battled for centuries over the way in which reality is understood. Following Aristotle's teachings, the empiricists and Enlightenment thinkers regarded the processing of sense and experiential data as the surest way to unlock truth. Plato's adherents, however, figures such as Immanuel Kant, deemed the human intellect a leaky and misguiding faculty, not quite efficient in comprehending truth. John Milton and Margaret Cavendish, the reigning theological epistemologists of the 17th century, pondered the nature of divine reality, the role of human rationality in understanding God's master plan, and the means by which that plan is (and should be) grasped by the human race. Both Milton and Cavendish declared in their works, Paradise Lost and The Blazing World, that reason as a means to arrive at ultimate truth is insufficient; in the end, faith is the only tool with which human beings acquire proper knowledge. After an initial reading of The Blazing World, one would assume Cavendish ranked reason above faith, parting ways with Milton; the Empress in the tale is nearly obsessed with scientific inquiry. Upon close analysis of the text, however, it becomes evident that Cavendish's message is complementary to Milton's. This is not to say that either Milton or Cavendish was a pure theologian in their world view, placing no value on science or logic; rather, both found a measure of importance in the findings of contemporary science and consequently instilled in their literary protagonists curiosity about the laws of the universe. It was just such cosmic curiosity that plagued thinking individuals of the Renaissance period. As Europe slowly developed a flavor for scientific inquiry, well-guarded theological dogmas were threatened; the mid-1600s was indeed a time of questioning long-established religious and political doctrines. While grappling with the emerging debate of reason versus faith, Milton and Cavendish offered philosophical fictions heralding the supremacy of the latter. Characters in the authors' works discover that reason, untempered by belief in divine truth, is dangerous. Cavendish's Empress of the Blazing World, for example, is a tyrannical ruler who demands that her subjects uncover the secrets of the natural world.

Monday, November 11, 2019

The weak are forced to create alternative realities Essay

The brain is a crucible: a melting pot of intersecting ingredients that forges a reality that is deceptively the same, but often vastly different, for each individual. That reality is a construct is a fashionable notion these days; it means that we tend to see reality from a particular frame of reference. There is always a context, whether it be political, social or cultural. It is those who are unable to construct a satisfactory reality who are forced to create an alternative reality, perhaps one that fulfils their dreams and meets their views and values. In the words of cognitive neuropsychologist Kaspar Meyer, "what is now clear is that the brain is not a stimulus-driven robot that directly translates the outer world into a conscious experience. What we're conscious of is what the brain makes us be conscious of, and in the absence of incoming signals, bits of memories tucked away can be enough for a brain to get started with". Reality for each individual differs according to their past experiences and memories, as well as what they choose to perceive to be true. Those with weaker frames of mind - such as individuals suffering from mental disorders, or simply living under delusion - tend to create alternative realities in order to escape the harsh truth. Consider the materialism of the post-war United States. Motivated by prosperity and wealth, all Americans were expected to achieve the profound 'American Dream', which Arthur Miller critiques throughout his play 'Death of a Salesman'. The play's lead character, Willy Loman, struggles to face his true reality and instead chooses to believe he is leading the life he had always dreamt of. Willy believes himself to be the best salesman of his company, claiming he is "well liked" by all and "vital in New England", when in fact his true reality proves to be quite the opposite. Willy struggles to pay his mortgage and fails to support and provide for his family. Despite his favourite son Biff finding the words to call him out for what he truly is - "(a) fake... (a) big phoney fake" and "a dime a dozen" - Willy remains ignorant of the truth. Willy's alternative reality provides him with the motivation to continue his life, despite the loss of his job and the loss of respect from Biff. Alternative realities provide temporary relief from the harsh truth of reality, which is sometimes necessary for those who are considered mentally weak. It is often easier to support the alternative realities created by the mentally weak, because, given their mental state, dismissing what they believe to be true can carry several consequences. In 'Death of a Salesman', Willy's wife Linda remains supportive throughout her husband's delusion. He claims she is his "foundation (and) support", which simply conforms to the expected role of a 1950s housewife. Another example is the 2010 film 'Shutter Island', directed by Martin Scorsese, which clearly highlights the importance of accepting the alternative realities created by the mentally weak. The film's protagonist Teddy Daniels believes himself to be a U.S. marshal assigned to investigate the disappearance of a patient from Boston's Shutter Island mental institution. In truth, however, Teddy is actually Andrew Laeddis, one of the institution's most dangerous patients because of his delusions and his violence towards the staff and the other patients. 
Andrew's (or Teddy's) delusion created an alternative reality in which he was able to escape the truth about his murderous past. In order to support his alternative reality, the staff at the institution developed a scenario in which Andrew was able to live out his delusion, thereby preventing the otherwise dangerous psychological effects of confronting his true nature. If Andrew had in fact been exposed to his true reality rather than living as his alter ego, he might not have been able to survive, hence proving the importance of supporting a mentally weak individual's alternative reality. Alternative realities may not always be negative. In these cases, the alternative reality protects the individual from harm or from the negative attention of exposing their true self. Consider the death of Whitney Houston, or the even more recent Robin Williams. Despite their true reality consisting of depression and substance abuse, these two renowned celebrities developed and maintained an alternative reality that allowed others to portray them as role models and successful artists. In the case of Robin Williams, his severe depression led to his suicide. As a comedian and successful actor, Williams was perceived by the majority to be a motivated, happy man. In truth, despite working to ensure other people were laughing, he was diagnosed with severe depression, to the point where he eventually took his own life. Robin Williams's alternative reality led others to see him as he was not, but without the negative attention of showing who he really was. In Whitney Houston's case, despite her image as an iconic, successful singer, her alternative reality concealed a cocaine addiction, and she ultimately drowned in a hotel bathtub. Following their deaths, the public was finally made aware of who they truly were, regardless of what we had previously perceived them to be. Alternative realities such as these can be crucial to ensure happiness and satisfaction for the individual, without highlighting their true selves to the world. Those who are mentally weak tend to create alternative realities in order to avoid their true selves. Whether they are living within a delusion - such as Willy Loman - or suffering from a mental condition - such as Andrew Laeddis (otherwise known as Teddy) - alternative realities may be beneficial for the individual, however difficult for others to accept. Because individual realities differ according to social, emotional, cultural and political factors, each person must construct a reality that is most suitable for their views and values, even if that results in alternative realities being created. In the words of author Mignon McLaughlin, "a critic can only review the book he has read, not the one which the author wrote", and therefore we cannot judge an individual's choice of reality or alternative realities without experiencing it ourselves first hand.

Saturday, November 9, 2019

Breath, Eyes, Memory by Edwidge Danticat

Breath, Eyes, Memory by Edwidge Danticat The novel Breath, Eyes, Memory is a true manifestation of medieval and present human society. In simpler terms, it reflects the basic elements that spun our existence. These elements are explained through the main themes of the novel, and these themes form the framework of this paper: immigration, love and parenting are discussed as the main themes in the novel. Immigration Immigration is a major theme in the novel Breath, Eyes, Memory because it describes the foundation of the novel's plot. Moreover, the theme of immigration is almost representative of current and past American immigration trends. From the novel, a reader is able to see the difference in culture between Sophie and her mother. Sophie was raised in Haiti but her mother lived in New York (Danticat 3). As the novel progresses, we see that Martine (Sophie's mother) invites her daughter to the US to stay with her. From this understanding, the theme of immigration is profound. After shifting her residence from Haiti to New York, Sophie discovers missing pieces of her past. In addition, she is able to adjust to the new American lifestyle. Later in the narration, Sophie returns to Haiti to see her grandmother after she develops some resentment towards her mother. Her trip back to Haiti is another manifestation of the theme of immigration, where she goes back to her native homeland to live with her grandmother and aunts. Throughout the novel, the differences in culture (between native Haitians and Americans) are exposed, and the concept of assimilation is emphasized to synchronize the two cultures (Danticat 15). Love The theme of love is profound in the novel Breath, Eyes, Memory. Love manifests in the Haitian ritual to check female virginity, where mothers test their daughters to ensure they are still pure. This is an act of love, which manifests in protection. Testing is therefore done to ensure mothers protect their daughters from the social evils of the world. Briefly, this ritual acts as a deterrent for young women against engaging in runaway sexual adventures, which may expose them to harm (Danticat 23). Therefore, due to the practice of the ritual, young women observe chastity because they would not want to be condemned if they failed the test. Though the entire experience is traumatizing for Sophie, clearly, the procedure is done out of love. When Sophie moves to America, she finds love with her husband. This episode in the novel's plot is a fast forward to Sophie's life after high school (Danticat 31). Sophie becomes obsessed with the man next door and through love, they are able to court and live together. From this love, they bore a daughter. The analysis of love within the above framework can be understood in the context of family love because Sophie and her husband lived together, bound by love. By extension, the theme of love also manifests in the bond that existed among the Caco women. Coupled with a deep sense of history, the theme of love binds the practices, beliefs and values shared by the Caco women (Danticat 31). When Sophie moves back to Haiti, she seeks counsel from these women and consequently, their advice shapes her ideals as a woman. 
The bravery and struggles of the Haitian women are passed down to Sophie through the love they have for her. They also treat her as one of their own because of the love they all share.

Parenting

A major part of the novel Breath, Eyes, Memory highlights the theme of parenting. In fact, Sophie’s entire experience is understood within the framework of parenthood (Danticat 40). Her trip from Haiti to New York, her experiences as a mother, and her trip back to Haiti highlight her quest to understand parenthood. Raised without a mother, Sophie encounters the theme of parenting early in the novel, when Martine (a childless mother) invites Sophie (a motherless child) to live with her in the US. Parenthood is at the center of this invitation because Sophie is curious to learn the history and life of a mother she never knew. Similarly, Martine is desperate to unite with her daughter. All along, Sophie’s grandmother raised her until she was 12. Everything she knew before she joined her mother came from the parental care she received from her grandmother in Haiti. Later sections of the novel revolve around Martine’s parenting skills, which eventually create a rift with her daughter. For instance, the virginity test is a parenting practice Martine inherited from her past as a Haitian girl. She passes this practice down to her daughter, but Sophie is not receptive to it. It is from this understanding that a rift is created between Sophie and her mother. This sentiment prompts her trip back to Haiti, where she goes to seek her grandmother’s counsel. The entire narration manifests the need for good parenting.

Conclusion

The themes of immigration, parenting and love feature prominently in Breath, Eyes, Memory because they are used to explain the lives of the main characters. These themes represent real-life situations affecting people in society, and almost concisely, they summarize the fabric of our social relationships. For instance, love and parenting are core foundations of family life, while family life is the core of society. Based on this understanding, the themes discussed above are central to an understanding of the novel Breath, Eyes, Memory and a mirror of society.

Danticat, Edwidge. Breath, Eyes, Memory. New York: Vintage Books, 1998. Print.

Wednesday, November 6, 2019

Childs Book of True Crime essays

Child's Book of True Crime essays Fresh from college, Kate Byrne, a 22-year-old, is working in her first job, teaching a fourth grade class in Endport, a small town on the southern coast of Tasmania. Strangely childlike, she is embroiled in a love affair with the father of her most gifted student, Lucien Marne. Thomas Marne is a successful corporate lawyer in Hobart, and the first chapter of A Child's Book of True Crime primarily focuses on his relationship with Kate. Kate struggles out of her black underwear in Thomas' car while he speeds them toward a motel during her school lunch hour. The two appear restless in anticipation of fulfilling their sexual desires. Kate makes coarse references to her surroundings, relating passing boulders to "mouths and tongues, like pornographic things." Thomas begins to lose focus on the road ahead; "[his] driving deteriorates" as his concentration shifts to Kate's flirtatious motions. When commenting on their surroundings, the attitudes of the two are juxtaposed: Kate marvels at the luxury mansion they pull up to, "It's lovely," while Thomas, answering "Yes it is," is clearly agitated. As the story develops, it becomes increasingly obvious that Hooper wants us to see that the relationship between the characters is based solely on sexual attraction. Thomas' comment, "I'm going to rent the bed by the half hour," implies how brief and insignificant these meetings are to him. He appears to be avoiding the commitment of their relationship, constantly reiterating to Kate that the only reason for these meetings is to alleviate boredom: "This is just sex, nothing more." Before arranging a reservation for the hotel room, he reminds Kate the affair is "...to be strictly kept away from the sentimental." I am given the impression that his cautioning is directed not at Kate, but rather at himself. ...

Monday, November 4, 2019

Child adolescence and development Research Paper Example | Topics and Well Written Essays - 750 words

Child adolescence and development - Research Paper Example In my days as a child, I was carefree with no worries at all. I would wander like a deer over the open fields. I enjoyed the various natural beauties of the gardens together with my friends. The days gone are gone forever. What I have are only the memories that remain in my mind. The memories make me cry and laugh at times. Nevertheless, it is impossible to cut them out of my life. For such reasons, childhood memories are said to be the sweetest in a person's life. I too have several memories of my childhood, and they represent the sweetest time in my mind. My childhood memories are indeed sweet. No one can forget his or her childhood experiences, whether painful or pleasant. I still remember my childhood very well. I was born in the suburbs of Illinois, where I spent my childhood. My father was employed by the government. Home was a simple house where we lived happily together: my parents, my sisters and brothers, and me. That is not all; there was another family member, my grandmother. She was humble and affectionate. I remember her trying to teach me to recite some sayings and quotes. I was very young then, and my mind was too weak to retain them; even speaking was still a difficult task. She showed me more love than any other family member did. These memories are bitter, for I lost her a few years ago. I am lucky that my parents are still alive to this day. Both Mum and Dad have been my best teachers in language. They taught me how to pronounce simple words like "no", "yes", "come", and "go". There was a field in front of our house where paddy and other crops grew. The beautiful golden color of the paddy field attracted me a lot. Every afternoon, I walked to the playground through the paddy field, and the paddy plants always brushed against me. I got lost in the midst of the beauty of nature. I loudly named each flower as I went by playing. Though the names hardly meant anything, at least I was better than

Saturday, November 2, 2019

Performance Appraisals Essay Example | Topics and Well Written Essays - 1000 words

Performance Appraisals - Essay Example Appraisal outcomes are used to recognize the weaker performers who may require some form of counseling or, in extreme cases, demotion, discharge or a reduction in pay. Performance appraisal involves an assessment of actual against desired performance. It also assists in assessing the different factors which influence performance. Managers need to plan performance growth approaches for each employee in a structured way. Managers should keep the objectives of the organization in mind and plan for the best possible use of all available resources, including financial ones. Performance appraisal is a multistage procedure in which communication plays a significant part.

(i) Essay appraisal method: The evaluator writes a short essay providing an evaluation of the strengths, weaknesses and potential of the employee. In order to do so impartially, it is essential that the evaluator knows the employee well and has interacted with the employee. Because the time taken and the contents of the essay differ between evaluators, essay ratings are difficult to compare.

(ii) Graphic rating scale: A graphic scale evaluates a person on the quality of his or her work (average; above average; outstanding; or unsatisfactory). Graphic scales seem basic in design, yet they apply to a wide assortment of job responsibilities and are more consistent and reliable in comparison with essay appraisal.

(iii) Field review method: To overcome evaluator-related bias, essay and graphic rating techniques can be joined in an orderly evaluation procedure. In the field review method, 'an associate of the HRM staff convenes with a small group of evaluators from the supervisory units to talk about each rating, thoroughly recognizing areas of inter-evaluator difference.' Although field review evaluation is considered valid and dependable, it is time consuming.

(iv) Forced choice rating method: The forced-choice rating method does not involve discussion with managers, unlike the field review method. This method has numerous variations; the most common compels the evaluator to choose the best-fit and worst-fit statements from a group of statements. These statements are weighted or scored in advance to evaluate the worker. The scores or weights allocated to the individual statements are not revealed to the evaluator, so that he or she cannot favor any employee. In this way, evaluator bias is largely eliminated and comparable standards of performance develop for an objective appraisal. This method is of little worth wherever performance appraisal interviews are carried out.

(v) Critical incident appraisal method: In this technique, a manager describes significant incidents, giving particulars of both positive and negative performance by the employee. These are then discussed with the employee. The conversation focuses on actual behavior rather than on personality. This technique is well suited to performance evaluation