Operational Effectiveness – Another Way of Looking at Performance


Many organizations measure performance in terms of efficiency. The field of cost accounting is built around these concepts and principles. The problem is that efficiency is no longer an effective measure of what is happening in the operation of the organization. It is possible to achieve high levels of efficiency and still operate at a loss.

Today’s organization requires a different measure – effectiveness. In other words, how effectively are resources being applied to the operation?

Efficiency:

– The ratio of the output to the input of any system.

– Skillfulness in avoiding wasted time and effort.
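As a minimal illustration of the ratio just defined, efficiency can be computed directly; the figures below are invented for the example:

```python
def efficiency(output_units, input_units):
    """Efficiency: the ratio of the output to the input of any system."""
    return output_units / input_units

# A line converting 1,000 units of input into 900 units of output
# (invented figures) is 90% efficient -- yet, as the article notes,
# it could still be doing the wrong things and operating at a loss.
print(efficiency(900, 1000))  # 0.9
```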

Effectiveness:

– The ability to identify and do the things that contribute to the organization’s results.

– The emphasis of effectiveness is on ‘doing the right things,’ not merely on ‘doing things right’ (which is what efficiency is about).

Benefits of Effectiveness

There are a number of benefits to applying effectiveness measures to the Reliability Oriented Organization:

– The practice of personal effectiveness creates results that are continuous rather than once and done.

– A focus on personal effectiveness greatly encourages the elimination of wasteful activities that do not produce a contribution to the organization’s economic results.

– Personal effectiveness is a skill that transfers with the person: it can be applied even when job roles or situations change. It is a lifelong skill.

– On a personal level the knowledge that you are being effective reduces work stress and creates a feeling of well-being.

Use of Efficiency

In the recent past, efficiency in business operations was used as the sole focus of a company’s improvement efforts. The logic was that if we can control our costs, we can improve our profits. Efficiency focuses on how something is done in order to avoid waste in converting a physical input to a physical output. It is a yield-based measure. This was a sensible approach when applied to repetitive operations which could be systematized with a high degree of predictive repeatability. Disciplines such as Work Study and Organization & Methods improved efficiency. Factory automation and computers enabled the approach to be carried even further with outstanding success.

Problem with Efficiency

In the process of introducing efficiency – which was often accompanied by significant changes in work practices – the labor force began to shift away from manual workers toward people who were not bound by rigid procedures and processes. These people were increasingly required to exercise judgment in their work based on their knowledge and experience. This became ever more true as organizations changed rapidly to keep pace with transformations in the global marketplace. Doing the old job in the old way was no longer possible.

A Different Way of Looking At Throughput

In most companies, managers think that if they have produced something, it should be called throughput. For the Reliability Oriented Manager, throughput can be defined as: “All of the money that comes into the company minus what it paid its vendors.” The concept is best explained by Eliyahu Goldratt in his novel The Goal[i]. Goldratt masterfully explained the concept through the eyes of a plant manager who is tasked with saving his plant or shutting it down. The book, first published in 1984, is still worth the time to read.

What is Throughput?

Throughput is “the rate at which the system generates money through sales.” (“Throughput” is sometimes referred to as “Throughput Contribution” and has similarities to the concept of “Contribution” in Marginal Costing, which is sales revenue less “variable” costs – “variable” being defined according to the Marginal Costing philosophy.)[ii]

Throughput Accounting

Goldratt’s alternative to cost accounting begins with the idea that each organization has a goal and that better decisions increase its value. The goal of a profit-maximizing firm is easily stated: to increase profit. Goldratt’s alternative is called Throughput Accounting, and it uses three measures of income and expense:

1. Throughput

2. Investment

3. Operating expense

Investment is the money tied up in the system – the money associated with inventory, machinery, buildings, and other assets and liabilities. In “The Goal,” the terms “Inventory” and “Investment” were used interchangeably; the preferred term in Throughput Accounting is now simply “Investment.” One difference between Throughput Accounting and cost accounting is that inventory is valued strictly at the totally variable cost of creating it, with no additional cost allocations from overhead. Operating expense is the money the system spends in generating units. For physical products, OE is all expenses except the cost of the raw materials: maintenance, utilities, rent, taxes, payroll, etc.
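The three measures combine into the standard Throughput Accounting relationships: net profit is Throughput minus Operating expense, and return on investment divides that difference by Investment. A minimal sketch, with invented figures:

```python
def throughput(sales_revenue, totally_variable_costs):
    """Throughput: money coming into the company minus what it paid its vendors."""
    return sales_revenue - totally_variable_costs

def net_profit(t, operating_expense):
    """Net profit = Throughput - Operating expense."""
    return t - operating_expense

def return_on_investment(t, operating_expense, investment):
    """ROI = (Throughput - Operating expense) / Investment."""
    return (t - operating_expense) / investment

# Invented figures for illustration only.
t = throughput(sales_revenue=1_000_000, totally_variable_costs=400_000)
print(net_profit(t, operating_expense=450_000))              # 150000
print(return_on_investment(t, 450_000, investment=750_000))  # 0.2
```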

Key Questions

Managers need to test proposed decisions against three questions. Will the proposed change:

– Increase Throughput? – How?

– Reduce Investment (Inventory) (money that cannot be used)? – How?

– Reduce Operating expense? – How?

Summary

Management needs to shift its thinking from cost accounting toward measures of effectiveness, and it must begin to abandon simply measuring efficiency. It also needs to redefine throughput to cover the total flow from raw materials into the system to sales out of the system.

References:

[i] The Goal – Second Revised Edition, Eliyahu Goldratt, North River Press, Great Barrington, MA, 1992.

[ii] Throughput Accounting, Thomas Corbett, North River Press, Great Barrington, MA, 1998, p. 29.


Source by Brice Alvord

Opening up Your Structure – Folding Sliding Systems


Doors are much more than just entry and exit points. They provide a glimpse into the lifestyle of the people behind them. Be it the entrance to a home, office, shop or restaurant, it should be inviting and a reflection of the general air of the building. Entrance systems should blend well with the architecture without losing their prime focus on protection and privacy. Depending on the variable needs of these two, doors could be ornamental, encrusted, or transparent/translucent.

These days, many options for space-saving yet functional entrance systems, walls and windows are pouring into the market. These range from glass folding doors, sliding doors, folding sliding doors and folding windows to an exhaustive range of wall systems. As against conventional door openings, folding systems allow opening up to 95% of the total width.

Glass doors are not only aesthetic and elegant to look at; they also increase light and space, flexibly integrating beauty with purpose. Glass doors, walls or windows provide exciting solutions without harming the character of the building. A set of folding sliding doors can remarkably blend spaces together, remove barriers between inside and outside, and literally bring the outside in, conforming to the natural environment around it. Folding sliding doors are also referred to as sliding folding doors, bi-fold doors, accordion doors, folding windows and concertina doors.

Aluminum and timber (wood) folding systems or a combination of both are equally viable options as is glass.

Apart from enhancing beauty, folding systems also emphasize energy efficiency, protection against natural calamities such as hurricanes, security against burglaries, and acoustic insulation.

On the energy efficiency quotient, aluminum and wood folding door and wall systems top the list, while in acoustics nothing is better than glass. A brief update on each of these features is below:

Energy efficiency:

The main factors that are considered in energy efficiency of a folding system are:

1. U-factor: the rate at which heat leaves a building. The lower the better.

2. R-factor: a measure of the insulation that a wall, door or window provides. Contrary to the U-factor, a higher R-factor means better insulation.

3. SHGC (solar heat gain coefficient): indicates how well a product blocks heat from the sun. The lower the number the better; a low SHGC means the window transmits less solar heat.

4. VT (visible transmittance): the amount of visible light being transmitted. The higher the VT, the more light is transmitted.

5. Air leakage (AL): heat loss and gain occur through infiltration of air through cracks in the window assembly. The lower the AL the better.

6. Condensation resistance: measures the ability of a system to resist the formation of condensation on the interior surface of the product.
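The factors above lend themselves to a simple side-by-side comparison. The sketch below is only illustrative – the system names and figures are invented, and real ratings come from product labels – but it shows the direction each metric should move, including the reciprocal relationship between the U-factor and the R-factor:

```python
# Invented example systems and figures; real ratings come from product labels.
systems = {
    "aluminum bi-fold": {"u_factor": 0.45, "shgc": 0.30, "vt": 0.55, "air_leakage": 0.3},
    "timber bi-fold":   {"u_factor": 0.30, "shgc": 0.35, "vt": 0.50, "air_leakage": 0.2},
}

def r_factor(u_factor):
    """The R-factor is the reciprocal of the U-factor: higher means better insulation."""
    return 1 / u_factor

# A lower U-factor (and therefore a higher R-factor) means less heat loss.
best_insulated = min(systems, key=lambda name: systems[name]["u_factor"])
print(best_insulated)  # timber bi-fold
```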

Safety against Hurricane:

Superior glass folding systems, like those of NanaWall Systems, provide excellent protection against natural calamities like hurricanes. Having passed intense tests such as the Miami Dade County Test Protocols PA201, PA202 and PA203, NanaWall folding systems are approved for use in hurricane-sensitive localities.

Apart from hurricane protection, these systems offer weather tightness and are much more capable of withstanding adverse weather.

Safety against Burglaries:

Glass systems can include wire meshes within multiple layers of glazed glass, offering security against impacts from balls, axes or even bullets. Depending on the level of security required, one has the option to go for a suitable toughened glass. Laminated glass is the best option.

Acoustic barrier:

The acoustical performance of glass folding walls and doors is magnificent, as they block out almost 75% of noise. Laminated glass with ingrained insulation is the best solution for those seeking acoustical separation from the outside world.

Why settle for less when one can have more with a folding door? So open up your space and let your abode breathe in air!


Source by Gracy Moore

The Meta-Model of Planned Change


This is a model for managing change in human systems based on the classic perspective of organizational development developed by the NTL Institute for Applied Behavioral Science. The classic perspective holds that the tasks of an organization – from planning to production to sales – are accomplished with the highest level of productivity when those tasks are supported by a high quality of relationships among those responsible for them. With that in mind, the Meta-Model of Planned Change is offered. It is a model that believes in the empowerability of human systems and the people that live and work within them. Accordingly, it calls for collaborative strategies and tactics aimed at open communication and consensual decision-making.

A model is a descriptive system of information, theories, inferences, and implications used to represent and support the understanding of some phenomenon. Meta-, in the sense used here, is a context or framework. A meta-model could then be understood as a framework or context for a model-albeit, a model of a model. Therefore, a meta-model of planned change is a framework from which any number of more specific models of how to manage change in human systems can be understood and developed.

Our model is a three-dimensional matrix, with the horizontal axis describing the five iterative stages of any planned change project. The diagonal axis offers four levels of human systems – personal, interpersonal, group, and organization/community – to which the horizontal dimension can be applied. Though straightforward, these two dimensions can be difficult to use without the vertical axis, which describes eight disciplines that can facilitate the success of any particular planned change effort. The last page of this article offers a graphic of the three dimensions.
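The matrix just described can be enumerated directly. The sketch below uses the stage, level, and discipline names given in this article; multiplying five stages by four levels by eight disciplines yields the one hundred and sixty cells, or applications, of the model:

```python
from itertools import product

stages = ["contracting", "data gathering", "intervening", "evaluating", "disengaging"]
levels = ["personal", "interpersonal", "group", "organization/community"]
disciplines = [
    "conscious use of self", "systems orientation", "sound and current data",
    "feedback", "infinite power", "learn from differences",
    "empowerment", "support systems",
]

# One cell of the three-dimensional matrix per (stage, level, discipline) triple.
cells = list(product(stages, levels, disciplines))
print(len(cells))  # 160
```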

The Stages of The Planned Change Process

The stages of the planned change process are contracting, data gathering, intervening, evaluating, and disengaging. They are not discrete – they overlap and are iterative. Often, they must be simultaneously orchestrated, as each can trigger the need for another. Any stage can lead to any other stage. Data-gathering, intervention, evaluation, and disengagement can all lead to re-contracting.

Contracting

People in any of several different roles undertake planned change efforts: the person(s) with direct decision-making authority over a system or part of it, someone working or living within a system without direct decision-making authority, or someone from outside the system called in for that purpose. Regardless of role, we will call those who undertake change projects change agents or change leaders, and in every case they must contract for change with the other members of the system.

Contracting is the process of coming to agreement with those person(s) who are key to the success of a change project. If the change agent is the person in decision-making authority, the agent must contract for change with those who live and work under that authority. If the change agent works or lives within the system without decision-making authority, that person must first contract with the person in authority for the desired change. Together, they can contract with the other key people in the system. Similarly, a person from outside the system must first contract with the owner of the system, and together, they contract with the other key persons.

When organization-wide change is desired, or when a local change will have organization-wide impact, the change contract is best made at the highest level of management. Contracting at this level leverages the greatest accountability – rewards and penalties – for the desired change. Change occurs most efficiently from the point in the system that has the greatest impact for the least effort.

Effective change contracts must specify the following:

  1. Change goals that are clear, internally consistent, and have a systemic and human values orientation. The most effective change goals are fully consonant with the well being of the system as a whole, as well as its members.
  2. Clear, defined roles of the project leader (the client) and process facilitator (consultant). It is important that the project leader have primary responsibility for the system under change. It is just as important that the project leader understand that he or she is there to lead with the support of the process facilitator. The process facilitator (consultant) must have the required skills to support the project leader in effective use of the five stages and eight disciplines of the Meta-Model.
  3. Collaborative, inclusive, consensus-building change processes. These processes should be consistent with the human values orientation of the change goals, and create the levels of committed buy-in necessary for successful projects.

Data Gathering

Once the initial contract has been established, the prudent change agent would insist on a data-gathering stage. This process serves several purposes: 

  1. It provides important information for the effective planning of specific interventions.
  2. It galvanizes the organizational energy in preparation for “something happening.”
  3. It provides an opportunity for some initial empowerment coaching of those from whom data will be gathered.

Data should be gathered regarding the following: 

  1. What is working in the targeted system?
  2. What needs improvement within the system?
  3. What has been done to facilitate improvement?
  4. What barriers occurred to such attempts?
  5. Reactions to the change goals and the reasons for them.

Intervening

Implicit in the idea of the empowerability of human systems is the assumption that through improving relationships within the system, its leaders and members can begin to identify and resolve their own issues, and in the process create whatever change they wish. This could mean improving the relationships and resolving conflicts between system structures, groups, and individuals. At the intrapersonal level, some change action is often needed to help resolve the internal conflicts that bedevil many system executives and managers.

Interventions, as a stage in the total change process, are those actions designed to improve relationships within the target system. They open communication and develop more informed and inclusive decision-making processes. In their various forms, interventions include feedback to the system, team building, strategic planning, training, conflict management, and coaching.

Group facilitation and conflict management are the important skills necessary for designing and carrying out these interventions. These two skill sets require deep use of listening and straight-talk capacities. A systems orientation, where impact on the entire system is kept in mind, is essential. Of course, the ability to be flexible and congruent with any particular situation is fundamental. Conscious use of self is notable as the first of the planned change disciplines, and is described in the section on the Disciplines of Planned Change below.

Evaluating

The evaluating stage informs the change agent and the system about the results the interventions have had. It is as much an ongoing process as it is a specific stage. In essence, evaluation is a feedback based data-gathering process. This feedback will give the change leaders critical information about how the system has responded to an intervention, and how they might design the next intervention to be more effective. This concept is notably different from the use of feedback as an ineffective means of getting someone to change. It is more useful as a means of determining the quality of relationship that has, or has not been stimulated by a particular change action. Essentially, feedback is an evaluation process that can also be used to gather data about what can make a more effective next change action.

Evaluative processes can be as simple as asking how well something worked, and what might work better next time. More formal group processes can take a form where everyone takes a turn responding to an evaluative question, such as, ‘What did you learn about planned change this weekend?’ System-wide evaluations could be done, both at the end of a change project, and at periodic intervals after that to see how much staying power a certain systemic change might have. It is a good idea to have evaluative feedback processes built into a system’s ongoing routine to monitor the specific and general well-being of that system.

Disengagement

The process of completing or ending a change project is discussed only sparingly in the planned change literature. A typical disengagement process for the participants in the change project might include a closing evaluation session, statements of learnings gleaned from the project, and celebration of whatever successes were achieved.

In addition, the change leaders – task leader(s) and process facilitator(s) – should get together to formally agree that the project is complete, or otherwise at an end. Additional and personal feedback might be shared about what worked well or less well, and what might be done differently in a future project. Celebration would certainly be in order.

Appropriate closure and disengagement allow the system, and the people in it, to learn from their involvement in the project, and to let go and move effectively on to what is next.

The Disciplines of Planned Change in Human Systems

In order to create effectiveness within each of the prescribed stages of change, the following eight disciplines are offered. They directly support the notion of the empowerability of human systems, along with the people that live and work within them. Accordingly, they also support the use of collaborative strategies and tactics aimed at open communication and consensual decision-making.

Conscious Use of Self

The primary tool for anyone wishing to manage change in a human system is the configuration of intellectual, emotional, and physical energies that a particular person brings to the situation. That includes personality, abilities (particularly the ability to learn), and idiosyncrasies. Most change leaders have only begun to develop a full command of themselves. Instead, they tend to respond automatically to many situations. These automatic or habitual responses are the result of over-learning. Over-learning is extrapolating an appropriate learning from a past experience and applying it too broadly to every similar situation. Over-learning gives a ‘shotgun’ approach to life, where the impact of many intentions falls far from the anticipated results.

Another hindrance to conscious use of self is the way we define parts of ourselves as ‘okay,’ and other parts as ‘not okay.’ Too often, we deny the large portions of ourselves that we have defined as ‘not okay.’ We want to see ourselves as male, not female, or female, not male. We want to see ourselves as ‘nice,’ but never as ‘mean.’ In this manner, we deprive ourselves of the inherent flexibility that comes with the multiple aspects and attitudes that make up our fundamental integrity. Often, we judge ourselves too harshly.

In the processes of effective planned change, all the personal flexibility we can muster is needed. How we present ourselves in one situation with one person is not likely to be very effective in another, though the situation or person may be similar. Part of that flexibility is the ability to notice when we might be mistaking our assumptions for sound and current data. This is a pervasive pitfall, both in the world at large and in managing change in human systems.

Effective use-of-self calls for learning how to be aware of and how to direct our own thoughts, emotions, and behaviors. As we move toward mastery, we will be more able to behave in such a manner that the systems within which we wish to manage change will respond in ways consistent with our goals and intentions.

Systems Orientation

A pervasive approach to change defines a goal and then sets out in as straight a tactical line as possible to achieve it. Such an approach tries to ignore or run over any intervening or obstructing variables, such as the fact that several people may not want the goal to be reached, nor appreciate the tactics being used. A systems orientation to planned change looks holistically at human systems. It understands that any change within a system will reverberate throughout the entire system and impact even seemingly unrelated parts of it. Using a systems orientation we…

  1. Understand that systems are composed of constellations of forces that must be aligned for efficient and successful change projects.
  2. Widen our perspective from our immediate goal to one that considers the entire system.
  3. Simultaneously orchestrate several coordinated change actions.
  4. Develop feedback loops that are sufficient to stay in touch with the impacts of our change strategies and their specific actions.

Consider the following in helping you to think systemically:

Universal Connectedness: Everything is connected to everything else-processes, thoughts, feelings, and actions. Everything that happens is connected to something else.

Mutual Responsibility: For things to be the way they are everything must be the way it is; therefore, responsibility is always mutual. Those who see themselves as “doing nothing” are contributing to the way things are by “doing nothing,” just as much as everyone else is doing.

Sufficient Sound and Current Data: These are needed to determine the system boundaries containing both the problem and the solution. Look to a larger system definition when problems seem intractable.

Leverage Points: This is that accessible point in the system that creates the greatest impact toward the desired change with the least effort. The most important leverage point is the person whose system it is. To contribute to their success, build a high equity relationship with that person. If the system is yours, build a support system you can count on to help you create success.

A Powerful Reframe: A systemic perspective takes away the popular notion of single-point fault, allowing for an easier transition to the infinite perspective. For example, pain reframed from a systemic perspective is a signal for healing rather than a trigger for anger and fear.

A Function of Consciousness: Often, we are conscious of only a very limited part of ourselves and of all that is going on around us. An effective systems orientation calls for being present to a larger portion of ourselves and of what is going on around us. Only then will we begin to perceive systemic connectedness.

Sound and Current Data

An efficient and successful change process requires good information for effective planning and decision-making. Such a principle, though obvious, is a necessary reminder against mistaking our assumptions for accurate information. Our need to be “right,” to be seen as “smart,” to avoid “rocking the boat,” or to avoid upsetting the boss often overwhelms our need for sound and current data. Accordingly, many change efforts suffer from insufficient or inaccurate information, while others fall prey to power struggles over whose data is right and whose is wrong. A related pitfall occurs when the need for conformity prevents essential data from surfacing.

An environment of openness, straight talking, truthfulness, and honesty can be built from effective conflict management and team-building processes. In this way, a safe environment can be created where sound and current data can openly exist.

Feedback

Systemic feedback is information from our environment about how it is responding to us. It is relevant data that is available to us at all times, though often, we fail to notice it. Systemic feedback allows us to evaluate how well the impact of our behavior is congruent with our intentions. The more we can fine-tune our behavior to be synchronous with our intentions, the greater our effectiveness as managers of change.

People often attempt to use personal feedback as a direct means of changing someone’s behavior. However, it is not very good at that. Personal feedback offered with that intention is often heard as criticism, which, as often as not, generates defensiveness and resistance rather than the desired change. So, when someone says to you, “May I give you some feedback?” – duck!

Managing personal feedback effectively calls for understanding two principles: 

  1. Feedback always says something about the giver, not necessarily anything about the receiver. Consequently, your initial response should be curiosity concerning the giver’s intentions, and then decide your next course of action.
  2. What is done with feedback is solely in the hands of the receiver. Consequently, be curious about why you are reacting the way you are, and then choose a response that gets you what you want more effectively.

Infinite Power

Traditional planned change approaches often call for identifying the person or people who are not in accord with a change project, and replacing them with those who are. This process typically leads to a series of finite, win/lose power struggles that change little and waste energy on non-productive activities. An alternative approach would be to focus on infinite, win/win change goals and strategies, as win/lose processes always generate lose/lose results in the long term unless our physical survival is at stake.

An important aspect of playing infinitely is to focus on changing the quality of relationships within the target system, rather than trying to change members who do not seem in accord with a proposed change. This is directly related to the processes of conflict management and team-building mentioned in previous sections.

Focusing on changing the quality of relationships, rather than trying to change people minimizes the need for power struggles. Open, collaborative decision-making processes are enabled, during which most individual needs can be met while centering on developing strategies and tactics aimed at the change goals.

Learn from Differences

Differences are the only sources of learning we have. When used for learning, differences are the progenitor of synergy, wherein the whole is greater than the sum of its parts. Too often, however, differences are used finitely to determine who wins and loses. Accordingly, they are the source of wasteful power struggles or creativity-deadening conformity aimed at avoiding power struggles. Organizations overvalue conformity-those with critical information, or new or differing ideas, are warned not to “rock-the-boat,” therefore, making sound and current data a rare commodity. The Bay of Pigs and Challenger disasters are two highly dramatic examples of this phenomenon. New, differing, and sorely needed ideas are repeatedly stifled by our need to be safe within finite organizational cultures.

The ability to learn from differences is a critical conscious use of self for change leaders. It will support them in maintaining the systemic, non-judgmental perspective. Such a perspective is necessary to use the differences within their systems for the learning and synergy needed to collaborate toward effective change processes. Given our socialized propensity toward operating from the finite perspective, this is easier said than done. The infinite perspective helps, as it allows change managers the support of strong and long-lasting partnerships and teams. Such support is doubly critical as the stress of change can move us swiftly back to the traditional, conformity-oriented way of operating. With support, a speedy return to learning from differences can be provided as needed.

Empowerment

The client, and his/her system, have the necessary power to manage change within their system once their energies are released through effective, infinitely-oriented processes. Of course, learning from differences through good conflict management and team-building skills is concomitant with the infinite perspective. The potential success of many change projects is often minimized by system authorities or change agents who believe that they must make the change happen rather than empowering the system, its groups, and its individuals to make the change.

Critical aspects of empowerment are the experiences of choice and influence. Consider, as I experience my behavior as influential, I will begin to experience choice about how I respond to my environment. Consequently, I begin to experience myself as powerful. The more powerful I feel, the more I will contribute my skill and energy to those who support my experience of choice and influence.

Personal empowerment without effective leadership, conflict management and team building, however, can lead to chaos. Groups are the fundamental units of human systems. Successful systemic change, then, calls for personal empowerment within the context of group empowerment, and within the context of decision-making parameters that support the success. Accordingly, our definition of empowerment is supporting self and others to discover their ability to experience a choice about how they respond to their environment on behalf of increasing the well being of themselves and their environment.

Support Systems

The ability to develop support systems is crucial to effective planned change for two reasons. First, systemic planned change will occur when the support for that change reaches critical mass among the members of that system. The success of your planned change efforts depends on your ability to develop empowering partnerships across a full range of differences, using the infinite perspective of power.

Second, applying the eight disciplines to the five stages of planned change is a daunting task. Those who choose to take this on must develop strong support systems. Change in human systems is never created alone. It requires support systems. An initial support system might be one or two confidants. This small informal group might evolve into a larger group willing to take direct action and contribute to the critical mass that is crucial to success. We cannot manage systemic change alone. Develop support systems to help you strategize and operationalize your change strategy and to assist you in using yourself effectively.

The Meta-Model of Planned Change has one hundred and sixty boxes or applications. Perhaps one could master each and every one distinctly. It is more important, though, to use the meta-model to develop one's own model of planned change, tailored to one's own particular interests, goals, and skills. Just as important, have fun with it as you develop your own model.


Source by Michael F. Broom

Recovering After Ransomware


Ransomware is a computer malware virus that locks down your system and demands a ransom in order to unlock your files. Essentially there are two different types: PC-Locker, which locks the whole machine, and Data-Locker, which encrypts specific data but allows the machine to work. The main objective is to extort money from the user, normally paid in a cryptocurrency such as bitcoin.

Identification and Decryption

You will first need to know the family name of the ransomware that has infected you. This is easier than it seems: simply search online for malwarehunterteam and upload the ransom note to their identification service. It will detect the family name and often guide you through the decryption. Once you have the family name matching the note, the files can sometimes be decrypted with the matching tool; TeslaCrypt 4.0 infections, for example, have a dedicated decryptor. First the encryption key will need to be set: selecting the extension appended to the encrypted files will allow the tool to set the master key automatically. If in doubt, simply select <as original>.

Data Recovery

If this doesn’t work you will need to attempt a data recovery yourself. Often, though, the system can be too corrupted to get much back. Success will depend on a number of variables (operating system, partitioning, priority of file overwriting, disk-space handling, etc.). Recuva is probably one of the best tools available, but it’s best to run it from an external hard drive rather than installing it on your own OS drive, so you don’t overwrite the very files you are trying to recover. Once installed, simply run a deep scan and hopefully the files you’re looking for will be recovered.

New Encryption Ransomware Targeting Linux Systems

Known as Linux.Encoder.1, the malware is attacking personal and business websites and demanding a bitcoin payment of around $500 for the decryption of files.

Attackers discovered and quickly exploited a vulnerability in the Magento CMS. Whilst a patch for the critical vulnerability has now been issued for Magento, it came too late for those web administrators who awoke to find the chilling message:

“Your personal files are encrypted! Encryption was produced using a unique public key… to decrypt files you need to obtain the private key… you need to pay 1 bitcoin (~420USD)”

It is also thought that attacks could have taken place on other content management systems which makes the number affected currently unknown.

How The Malware Strikes

The malware strikes by being executed with administrator-level privileges. All the home directories, as well as associated website files, are affected, with the damage carried out using 128-bit AES encryption. This alone would be enough to cause a great deal of damage, but the malware goes further: it then scans the entire directory structure and encrypts various files of different types. In every directory it enters and encrypts, it drops a text file, which is the first thing the administrator sees when they log on.

There are certain elements the malware is seeking and these are:

  • Apache installations
  • Nginx installations
  • MySQL installs which are located in the structure of the targeted systems

From reports, it also seems that log directories are not immune to the attack, and neither are the contents of individual web pages. The last places it hits – and perhaps the most critical – include:

  • Windows executables
  • Document files
  • Programme libraries
  • Javascript
  • Active Server Pages (.asp) files

The end result is that a system is held to ransom: businesses know that if they cannot decrypt the files themselves, they must either give in and pay the demand or suffer serious business disruption for an unknown period of time.

Demands made

In every directory encrypted, the attackers drop a text file called README_FOR_DECRYPT.txt. Payment is demanded, with decryption available only through a hidden site reached via a gateway.

If the affected person or business decides to pay, the malware is programmed to begin decrypting all the files, undoing the damage. It appears to decrypt everything in the same order it was encrypted, and the parting shot is that it deletes all the encrypted files as well as the ransom note itself.
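Since the note is dropped in every directory the malware touches, a quick way to gauge the scope of an infection before calling in help is to walk the filesystem and list every directory containing the note. The following is a minimal Python sketch; the note name README_FOR_DECRYPT.txt comes from this report, and the target path in the example is hypothetical:

```python
import os

NOTE_NAME = "README_FOR_DECRYPT.txt"  # note dropped by Linux.Encoder.1

def find_ransom_notes(root):
    """Return every directory under root that contains the ransom note."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if NOTE_NAME in filenames:
            hits.append(dirpath)
    return hits

if __name__ == "__main__":
    # Example: list affected directories to understand the spread
    # before contacting a recovery specialist.
    for d in find_ransom_notes("/var/www"):
        print(d)
```

This changes nothing on disk; it only reads directory listings, which is the safe first step before any recovery attempt.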

Contact the Specialists

This new ransomware will require the services of a data recovery specialist. Make sure you inform them of any steps you have already taken to recover the data yourself; this may be important and will no doubt affect the success rate.


Source by Aran Pitter

Need a CCTV System


This article helps you to specify a CCTV system; the intended audience is either an installing company or an end user. You should be aware there are many types of CCTV system available on the market; these range from cheap CCTV systems for basic monitoring, through best-value security camera systems for some form of identification, to high-resolution security systems that can lead to identification and prosecution.

A good security camera system will offer best value for money without compromising on quality. There are many products available on the market, which makes it very difficult to identify which are suitable for your requirement. Sometimes it is equally difficult to identify areas that are vulnerable and a suitable CCTV camera to cover them. Most people forget that a CCTV camera system is a long-term investment, and they should discuss their requirements with a technical salesperson before they make the purchase.

Understanding CCTV terminology can also be daunting; see our FAQ section for more details.

Understanding your Security requirements

The main reasons you require CCTV security cameras will dictate the type of system you need. Some of the reasons for needing a security system could be:

– Shop theft

– Shop or home break-ins

– Vandalism

– Industrial espionage

– Danger to individuals from attack.

– Health and safety of individuals on the premises or site.

– To replace or reduce manned guarding.

– To supplement manned guarding, making them more efficient.

– To monitor persons entering and leaving the premises.

– To provide visual confirmation of intruders activating an alarm.

– To monitor a remote, unattended site.

The reasons for a system could be endless, but for a particular site there will be a finite list of reasons for considering CCTV. If they cannot be listed, you probably don't need it.

What is the possible solution?

Once the problem is understood, the next step is to find how a solution can be achieved. The solution could take many forms: an intruder alarm system, some form of deterrent (lighting, fencing and gates), a CCTV system or manned guarding. Your need will depend on the circumstances and requirements of any particular site, but it is important to at least make a list and consider all the possibilities. Some options may be impracticable and others may be too expensive, but you should finish up with a short list of possibilities. Quite often the solution will point to a CCTV system, as this will be cheaper and more affordable.

Decided that you need a CCTV system?

Before selecting the type of CCTV system that will fulfil your requirements, you should consider: the type of CCTV cameras you need, how you will monitor the system, whether you will require network access (remote internet access), and cabling.

Type of CCTV cameras you need:

Colour cameras generally require a higher level of lighting than their black & white counterparts. Colour cameras give the advantage of being able to easily distinguish and detect objects simply by their colours, whereas black & white cameras offer better resolution in low-light conditions.

– Covert cameras. These cameras are so small they cannot be easily seen or are disguised as a different device (such as smoke detector, PIR etc).

– Day/Night cameras. These cameras switch from colour to black and white depending on lighting levels. They are ideal for variable lighting conditions.

– Night Vision cameras. These cameras have their own light source in a light spectrum that can’t be seen by the naked eye.

– Outdoor cameras. These cameras have hardened, waterproof outer bodies.

– Speed Dome cameras (Pan, Tilt, Zoom). These cameras allow for remote control of what the camera is pointed at and what it is focused on.

– Vandal Proof cameras. These cameras come in hardened cases that can resist physical abuse.

How will you monitor the CCTV system?

– Main output: Most CCTV DVRs have a composite video output which can be viewed on standard TV monitors (via an AV or SCART input).

– Spot out / call output: This output is also composite video and can be used to monitor CCTV cameras full-screen in sequence.

– VGA output: This is the standard output used on PCs; any VGA TFT LCD monitor can be used.

Network access / remote access – CCTV DVR access over the internet (broadband):

– Internet access: Most CCTV DVRs nowadays have remote access via the internet.

– Simplex: The DVR can either record or play back, but cannot do both simultaneously.

– Duplex: The DVR can perform any two functions simultaneously (record, playback or remote viewing, but not all three).

– Triplex: The DVR can perform three functions simultaneously (record, playback and remote playback).

– Pentaplex: The DVR can carry out record, playback, remote access and remote playback simultaneously.

What types of CCTV cable are there?

– Pre-made leads: These are pre-fabricated leads with BNC and power connectors already terminated on the cable. Very simple to install; no real skill required. These leads are designed to carry low voltage (12V DC) up to a distance of around 35m. Distances greater than 35m will cause picture degradation from the camera.

– Local AC power: Where the distance is greater than 35m, powering the cameras locally lets you cover much greater distances. For distances up to 100m, RG59 coaxial cable can be used.

– Combined coaxial cable with power: RG59 coaxial cable with a 2-core power cable attached (like a shotgun).

– CAT5E: Longer distances can be covered using CAT5E in conjunction with passive transceivers.
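The distance rules above can be condensed into a small helper. This is only a sketch of the guidance in this article (roughly 35m for pre-made 12V leads, RG59 with locally powered cameras up to around 100m, CAT5E with passive transceivers beyond that); actual limits vary with camera, power supply and cable quality:

```python
def recommend_cabling(distance_m):
    """Suggest a CCTV cable option for a given camera run,
    per the rough distance limits discussed above."""
    if distance_m <= 35:
        return "pre-made lead (BNC + 12V DC power)"
    if distance_m <= 100:
        return "RG59 coax with locally powered camera"
    return "CAT5E with passive transceivers"
```

For example, a 20m run falls into the pre-made lead category, while a 200m run points to CAT5E.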

Selecting the most suitable CCTV system is a compromise between the quality, the area you want to cover and the overall budget. It is advisable to have an in-depth discussion with a technical salesperson before you select the security cameras or the DIY CCTV system you need. A good technical person will try to understand your needs and explain the differences between the various CCTV cameras before making any recommendation.


Source by Alan Hayden

Rimage 8100 – 8100N Producer III Review


The Rimage Producer III 8100 we tested had four CD/DVD burners, a PrismPlus thermal printer, Rimage software version 8.1, and the DiscWatch light for viewable system status.

Rimage 8100 and 8100n systems come in a couple different configurations of CD, DVD and/or Blu-ray disc burners. In addition, there are two different thermal printer options – the PrismPlus and the Everest 600 (you can read a review of this printer by searching “Everest 600” on Google). Advanced features include remote job submission, job streaming, variable merge fields, label serialization, Windows API, rapid API and SDK, DVD video protection plug-in, DiscWatch light and multiple warranty service options.

We tested the Rimage 8100 for 3 months running multiple print and copy, print only, network submitted and DVD video protection jobs.

Price – The price for the Rimage unit we tested is $40,950. The 8100 is the most expensive CD/DVD publisher that we have ever used or tested. The expensive price tag may be justified depending on your requirements and needs. 1 Star.

Speed – The Rimage 8100 produced 105 half-full CDs and 47 half-full DVDs in one hour. The throughput falls to 70/hour and 33/hour for completely full CDs and DVDs. This is the highest output of any integrated duplicator and printer on the market today, even compared to systems with more than four disc burners. The high output is attributed to the speed of the robotics, the true asynchronous burn-and-print capabilities of the Rimage software, and the computer configuration. You can get 1000 CDs printed and copied in a 10-hour day. 5 Stars.
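The "1000 CDs in a 10-hour day" figure follows directly from the measured throughput, and is easy to sanity-check with simple arithmetic. A quick sketch using the rates quoted above:

```python
def batch_hours(discs, discs_per_hour):
    """Hours needed to produce a batch at a measured throughput."""
    return discs / discs_per_hour

# 1000 half-full CDs at the measured 105/hour rate is just under
# a 10-hour day:
print(round(batch_hours(1000, 105), 1))  # 9.5
```

At the full-disc rate of 70/hour, the same batch would take just over 14 hours, so the claim holds only for half-full discs.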

Bin Capacity – 300-disc capacity. Like most other Rimage systems, the 8100 and 8100n utilize a 4-bin carousel that lets the user load a maximum of 300 CDs and/or DVDs at a time. There are some CD/DVD duplicators on the market with 500- and 1000-disc capacities. 3.5 Stars.

Reliability – The Rimage 8100/8100n was extremely reliable in our 3 months of testing. The robust robotics and the PrismPlus printer performed at a very high level for the entire duration of our testing. Assuming you use good quality CD or DVD media that does not stick together, you will get all of your CDs and DVDs completed without error. 5 Stars.

Cost per Print – Using the PrismPlus printer and the black ribbon will net you a $0.03/disc or less cost depending on print coverage. If you use the red or blue ribbon, the cost per print is $0.04/disc. The CMY ribbon has a cost per print of $0.25/disc. The PrismPlus single color printing is the lowest in the industry. The Everest 600 printer option has a cost of about $0.32 per color print. 5 Stars.

Print Quality – The Rimage 8100 has two thermal printer options, the PrismPlus and the Everest 600. We tested a system with the PrismPlus thermal printer. The PrismPlus is ideal for monochrome solid logos, simple graphics, text and barcode printing. The other printer option is the Everest 600, which boasts photo-realistic 600x600dpi printing. The Everest 600 is ideal for full color, high-resolution disc printing. 5 Stars.

Print Durability – Both the PrismPlus and the Everest 600 are thermal transfer printers whose prints are completely indelible and waterproof. In the case of the Everest 600, the colors will not fade or lose their brilliance over time, because the thermal re-transfer process protects the discs from external forces like moisture and UV rays. 5 Stars.

Ease of Use – The Rimage 8100 we tested connected to the provided PC server through one USB 2.0 cable and four FireWire cables. The proven QuickDisc and CD Designer software came pre-installed on the PC and is very easy to learn and use. Rimage does offer on-site installation and training for $1800, but in most cases your Rimage vendor can help you out over the phone or with an on-site visit if needed. 4.5 Stars.

Maintenance – All Rimage publishers and printers work best in a dust-free environment, so the warehouse is not the recommended place to set up this type of equipment. The PrismPlus and Everest printers require bi-monthly cleaning of the print head and air filters to achieve the best printing results. In addition, keeping the drives and the input/output bins free from dust is recommended. 4 Stars.

Technical Support – Rimage has above-average phone technical support for the CD/DVD equipment industry. To maximize uptime and customer satisfaction, Rimage offers a variety of on-site, rapid exchange and post-warranty options. After-warranty repairs can be expensive, as they are with other manufacturers in this niche. That being said, we recommend purchasing a Rimage 8100 from a reputable dealer that has the experience to answer your technical support issues on the first call, and that can help with your operational requirements and repairs. 4 Stars.

Advanced Features – Rimage Producer III systems have many advanced features that no other equipment manufacturer in this niche offers. The features that we found useful were the DiscWatch light, which gives a visual indication of operational status, and the DVD Video Protect plug-in, which is designed to prevent copying or pirating of your intellectual property. Rimage also provides a powerful API for custom integration. 5 Stars.

Conclusion – The Rimage 8100 / 8100N (part# 530621-240 or 530641-240) is our top pick for high-volume disc publishing and printing requirements of 10,000 or more standard 120mm CD-R, DVD-R, or Blu-ray discs per month. Strengths include speed, reliability, low cost per print, and a host of advanced features like DVD Video Protect, a custom API and a software developer's kit.

Check out the links in the resource box below for more information and an unbeatable offer on the Rimage 8100 Producer III systems.


Source by Kevin Gabrik

Web Programming – The Object-Oriented Programming (OOP) Approach


Web programming is an aspect of web site development, and the role of the web programmer is just as significant as the web designer's role in the web design aspect of web site development. Programming languages have developed from machine language to low-level language and then to high-level language. A high-level language, which is a language close to natural language (the language we speak), is written using certain approaches. Notable are the monolithic and structured programming approaches. With the monolithic style, you write a whole program in one single block. In the structured programming approach, a program is divided into blocks of code called modules, with each module performing a specific task. BASIC, COBOL, PASCAL, C, and DBASE, which ran on the MS-DOS platform, could be written using both approaches.

Following the revolution of the Windows operating system, it became possible to write programs using a more advanced structured approach than the type used on the MS-DOS platform. This is the Object-Oriented Programming (OOP) approach, where a program is divided into classes and each class is subdivided into functions or methods, with each function providing a specific service. C++ and Java are typical examples of Object-Oriented Programming (OOP) languages, originally developed for non-web solutions. As the preference for web applications grew with the historical development of the internet and the web, the need to improve scripting languages continued to arise, and one way this was done was by making scripts Object-Oriented. Java applets and PHP (Hypertext Preprocessor) are examples of Object-Oriented Programming (OOP) languages for web solutions. PHP was originally not Object-Oriented, but it has been fully upgraded to an Object-Oriented Programming (OOP) language demonstrating the three pillars of OOP: Encapsulation, Inheritance, and Polymorphism. Thus, it is possible to write server-side scripts in an Object-Oriented fashion.

Object-Oriented Programming (OOP) structures a program into classes and functions or methods. To use a class and access the services rendered by each function, you must create an instance of the class. When an instance is created, an object is produced, which is held by an object variable. It is this object that is then used to access each function and make use of its service. The syntax of the class instantiation statement for object creation varies from language to language. In PHP, you use the new keyword. For instance, if you have a class named customer and you want to instantiate it and use the object to access the function select_records() in the class, you go about it this way:

$cust = new customer();

$cust->select_records();

The first line creates an instance of class customer and an object held by the object variable $cust. The second line accesses the service provided by the function select_records() through the object variable $cust. Java too uses the new keyword for object creation, but the application of the keyword in C++ is different, where it is used with a pointer variable during dynamic memory allocation.

I mentioned earlier the three pillars of Object-Oriented Programming (OOP): Encapsulation, Inheritance, and Polymorphism. They are integral features of PHP. Encapsulation is the process of hiding all the details of an object that do not contribute to its essential characteristics. This is achieved by making all instance variables of a class private, so that only the member functions of the class can access its private instance variables.

Inheritance is a situation in which a class derives a set of attributes and related behavior from a parent class. The parent class is called the super class or base class, and the inheriting class is called the sub class. The member variables of the super class become member variables of the sub class (derived class). In PHP, you use the keyword extends to implement inheritance, just like Java, for example

class customer extends products

Polymorphism is an extension of inheritance. It is the situation in which a sub class overrides a function in the super class. When a function or method is overridden, the name and the signature of the function in the super class are retained by the overriding function in the sub class, but there is a change in the function code.
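The three pillars read much the same in any OOP language. Here is a minimal sketch, written in Python purely for brevity; the Product/Customer names echo the article's extends example and are otherwise invented for illustration:

```python
class Product:
    def __init__(self, name):
        self._name = name              # encapsulation: internal state

    def describe(self):
        return f"product: {self._name}"

class Customer(Product):               # inheritance: Customer derives from Product
    def describe(self):                # polymorphism: same name and signature,
        return f"customer order for {self._name}"  # but new code

print(Product("disc").describe())      # product: disc
print(Customer("disc").describe())     # customer order for disc
```

The overriding describe() keeps the super class method's name and signature while changing its body, which is exactly the polymorphism described above.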

Another important feature of Object-Oriented Programming (OOP) languages is the constructor. A constructor is a function or method bearing the same name as its class, used to initialize member variables; it is invoked as soon as the class is instantiated, unlike other member functions, which are invoked only through the object variable. At this point, let us use the submission of data from, for instance, a fixed-asset register form for further illustration. Your PHP script needs to retrieve the data posted from the form, connect to the database, print custom error messages and insert the data into the database table. Using the Object-Oriented Programming (OOP) approach, you need 4 functions in the class:

  1. The constructor- to retrieve the posted data from the form.
  2. A function to connect to MySQL database.
  3. A function to insert record to the database using the INSERT SQL statement.
  4. A function to print custom error messages.
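In outline, such a class could look like the following sketch. It is written in Python with sqlite3 standing in for PHP/MySQL so it stays self-contained and runnable; the class, table and field names are hypothetical:

```python
import sqlite3

class FixedAssetForm:
    def __init__(self, posted):
        # 1. Constructor: retrieve the posted form data.
        self.asset = posted.get("asset", "")
        self.cost = posted.get("cost", 0)
        self.conn = None

    def connect(self, path=":memory:"):
        # 2. Connect to the database (sqlite3 stands in for MySQL here).
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS assets (asset TEXT, cost REAL)")

    def insert_record(self):
        # 3. Insert the record using an INSERT SQL statement.
        try:
            self.conn.execute("INSERT INTO assets VALUES (?, ?)",
                              (self.asset, self.cost))
            self.conn.commit()
            return True
        except sqlite3.Error as exc:
            self.print_error(exc)
            return False

    def print_error(self, exc):
        # 4. Print a custom error message.
        print(f"could not save asset record: {exc}")
```

Each numbered function maps to one of the four responsibilities listed above, which is what keeps the script organized and easy to debug.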

Because your program is in an organized form, it is easier to understand and debug. This will be highly appreciated when dealing with long and complex scripts, like those incorporating basic stockbroking principles. Within the limits of the structured programming capabilities of non-Object-Oriented languages such as BASIC, COBOL and PASCAL, you could organize a program too by dividing it into smaller, manageable modules. However, they lack the encapsulation, inheritance, and polymorphism capabilities of Object-Oriented Programming (OOP), which demonstrates a great advantage of the OOP approach.

Copyrights reserved.


Source by Olumide Bola

Windbg Minidump Tutorial – Setting Up & Reading Minidump Files


This is a tutorial on how to set up and read your minidump files when you receive a BSOD (blue screen of death), in an attempt to gain further insight as to the cause of the problem. First things first: download the latest debugging tools from the Microsoft site.

Then go to Start/Start Search and type in the command cmd.

Then change directories to:

C:\Program Files\Debugging Tools for Windows (x86)

by using the command:

cd c:\program files\debugging tools for windows (x86)

It’s case insensitive when using the cd command.

Then type in:

windbg.exe -z c:\windows\minidump\mini06190901.dmp -c "!analyze -v"

Your minidump file is located at C:\Windows\Minidump\Mini06200901.dmp. It’ll be in the form “MiniMMDDYY01.dmp”.
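That naming convention means the file name itself tells you when the crash happened. A small sketch of decoding the MiniMMDDYY01.dmp pattern (the trailing 01 is just a same-day sequence number; the helper name is my own):

```python
import re
from datetime import date

def minidump_date(filename):
    """Parse the crash date out of a MiniMMDDYY<seq>.dmp file name."""
    m = re.fullmatch(r"Mini(\d{2})(\d{2})(\d{2})(\d{2})\.dmp",
                     filename, re.IGNORECASE)
    if not m:
        raise ValueError(f"not a minidump name: {filename}")
    month, day, year, _seq = (int(g) for g in m.groups())
    return date(2000 + year, month, day)

print(minidump_date("Mini06200901.dmp"))  # 2009-06-20
```

This is handy when C:\Windows\Minidump holds many dumps and you want the one matching a particular crash.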

KERNEL SYMBOLS ARE WRONG. PLEASE FIX SYMBOLS TO DO ANALYSIS

If somewhere in the output of the Bugcheck Analysis you see an error like:

Kernel symbols are WRONG. Please fix symbols to do analysis.

Then it’s most likely that you are using outdated or incompatible symbols or corrupt files, or that you didn’t have the proper symbols at the specified location when the Windbg program was trying to analyze the minidump file. So what I did was open the Windbg program located at C:\Program Files\Debugging Tools for Windows (x86) (in Vista; I believe it’s the same location for XP).

SETTING THE SYMBOL FILE PATH VIA WINDBG COMMAND LINE:

This is an important step so ensure that your symbol path file is set correctly lest you get the kernel symbols are WRONG error or other types of errors. Now set the Symbol File Path (File/Symbol File Path) to:

SRV*e:\symbols*[path to microsoft symbols path]

However, for some reason I found that you cannot set the Symbol File Path directly in the “File/Symbol File Path” field. Instead, you need to change it through the Windbg command window by going to:

“View/Command”

In the bottom of the command window beside the “kd>” prompt type this in:

.sympath SRV*e:\symbols*[path to microsoft symbols path]

The part between the two asterisks (e:\symbols) is where the symbols from Microsoft’s servers will be downloaded to. The download is fairly large (approximately 22MB), so make sure that you have sufficient disk space.

SETTING SYMBOL FILE PATH IN THE ENVIRONMENT VARIABLE:

Alternatively, you can set it as an environment variable, either as a system or a user environment variable. To do this, press WINDOWS KEY+E. The WINDOWS KEY is the key to the right of the LEFT CTRL key on the keyboard. This will open Windows Explorer.

Then click on the “Advanced system settings” at the top left of the window. This step applies to Vista only. For XP users, simply click on the Advanced tab.

Then click on the “Environment Variables” button at the bottom of the window.

Then click on the “New” button under System Variables. Again you can create the environment as a user environment variable instead.

In the “Variable Name” type:

_NT_SYMBOL_PATH

In the “Variable Value” type:

symsrv*symsrv.dll*e:\symbols*[path to microsoft symbols path]

If you set the symbol file path as a system environment variable I believe you may have to reboot your computer in order for it to take effect.

OUTPUT OF WINDBG COMMAND

So the following is the output for my crash:

Microsoft (R) Windows Debugger Version 6.11.0001.404 X86

Copyright (c) Microsoft Corporation. All rights reserved.

Loading Dump File [c:windowsminidumpmini06260901.dmp]

Mini Kernel Dump File: Only registers and stack trace are available

Symbol search path is: SRV*e:\symbols*[path to microsoft symbols]

Executable search path is:

Windows Server 2008/Windows Vista Kernel Version 6001 (Service Pack 1) MP (2 procs) Free x86 compatible

Product: WinNt, suite: TerminalServer SingleUserTS Personal

Built by: 6001.18226.x86fre.vistasp1_gdr.0903021506

Machine Name:

Kernel base = 0x8201d000 PsLoadedModuleList = 0x82134c70

Debug session time: Fri Jun 26 16:25:11.288 2009 (GMT-7)

System Uptime: 0 days 21:39:36.148

Loading Kernel Symbols

………………………………………………………

……………………………………………………….

…………………………………………………..

Loading User Symbols

Loading unloaded module list

……………………….

Bugcheck Analysis

Use !analyze -v to get detailed debugging information.

BugCheck A, {8cb5bcc0, 1b, 1, 820d0c1f}

Unable to load image \SystemRoot\system32\DRIVERS\SymIMv.sys, Win32 error 0n2

WARNING: Unable to verify timestamp for SymIMv.sys

ERROR: Module load completed but symbols could not be loaded for SymIMv.sys

Unable to load image \SystemRoot\system32\DRIVERS\NETw3v32.sys, Win32 error 0n2

WARNING: Unable to verify timestamp for NETw3v32.sys

ERROR: Module load completed but symbols could not be loaded for NETw3v32.sys

Processing initial command ‘!analyze -v’

Probably caused by : tdx.sys ( tdx!TdxMessageTlRequestComplete+94 )

Followup: MachineOwner

0: kd> !analyze -v

Bugcheck Analysis

IRQL_NOT_LESS_OR_EQUAL (a)

An attempt was made to access a pageable (or completely invalid) address at an

interrupt request level (IRQL) that is too high. This is usually

caused by drivers using improper addresses.

If a kernel debugger is available get the stack backtrace.

Arguments:

Arg1: 8cb5bcc0, memory referenced

Arg2: 0000001b, IRQL

Arg3: 00000001, bitfield :

bit 0 : value 0 = read operation, 1 = write operation

bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)

Arg4: 820d0c1f, address which referenced memory

Debugging Details:

WRITE_ADDRESS: GetPointerFromAddress: unable to read from 82154868

Unable to read MiSystemVaType memory at 82134420

8cb5bcc0

CURRENT_IRQL: 1b

FAULTING_IP:

nt!KiUnwaitThread+19

820d0c1f 890a mov dword ptr [edx],ecx

CUSTOMER_CRASH_COUNT: 1

DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT

BUGCHECK_STR: 0xA

PROCESS_NAME: System

TRAP_FRAME: 4526c4 (.trap 0xffffffff4526c4)

ErrCode = 00000002

eax=85c5d4d8 ebx=00000000 ecx=8cb5bcc0 edx=8cb5bcc0 esi=85c5d420 edi=ed9c7048

eip=820d0c1f esp=452738 ebp=45274c iopl=0 nv up ei pl nz na pe nc

cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010206

nt!KiUnwaitThread+0x19:

820d0c1f 890a mov dword ptr [edx],ecx ds:0023:8cb5bcc0=????????

Resetting default scope

LAST_CONTROL_TRANSFER: from 820d0c1f to 82077d24

STACK_TEXT:

4526c4 820d0c1f badb0d00 8cb5bcc0 87952ed0 nt!KiTrap0E+0x2ac

45274c 8205f486 00000002 85c5d420 ed9c7048 nt!KiUnwaitThread+0x19

452770 8205f52a ed9c7048 ed9c7008 00000000 nt!KiInsertQueueApc+0x2a0

452790 8205742b ed9c7048 00000000 00000000 nt!KeInsertQueueApc+0x4b

4527c8 8f989cd0 e79e1e88 e79e1f70 00000000 nt!IopfCompleteRequest+0x438

4527e0 8a869ce7 00000007 00000000 00000007 tdx!TdxMessageTlRequestComplete+0x94

452804 8a869d33 e79e1f70 e79e1e88 00000000 tcpip!UdpEndSendMessages+0xfa

45281c 8a560c7f e79e1e88 00000001 00000000 tcpip!UdpSendMessagesDatagramsComplete+0x22

STACK_COMMAND: kb

FOLLOWUP_IP:

tdx!TdxMessageTlRequestComplete+94

8f989cd0 6804010000 push 104h

SYMBOL_STACK_INDEX: 5

SYMBOL_NAME: tdx!TdxMessageTlRequestComplete+94

FOLLOWUP_NAME: MachineOwner

MODULE_NAME: tdx

IMAGE_NAME: tdx.sys

DEBUG_FLR_IMAGE_TIMESTAMP: 479190ee

FAILURE_BUCKET_ID: 0xA_tdx!TdxMessageTlRequestComplete+94

BUCKET_ID: 0xA_tdx!TdxMessageTlRequestComplete+94

Followup: MachineOwner

It looks like a bunch of hieroglyphic mumbo jumbo. However, if you look closely you can gain some further insight into the possible problem or its cause. The PROCESS_NAME is System, suggesting a system process. The MODULE_NAME is tdx.

OUTPUT KD COMMAND: LMVM TDX

The tdx was clickable for me, which executes the command:

kd> lmvm tdx

as a kd command. The ‘lm’ in “lmvm” is Loaded Modules. The ‘v’ is verbose. The ‘m’ is a pattern match. The debugger chm manual states:

m Pattern

Specifies a pattern that the module name must match. Pattern can contain a variety of wildcard characters and specifiers. For more information about the syntax of this information, see String Wildcard Syntax.

You can find a lot of information from the chm manual when you download the windbg from Microsoft. It will located here:

C:\Program Files\Debugging Tools for Windows (x86)\debugger.chm

The output from the above command is:

0: kd> lmvm tdx

start end module name

8f97f000 8f995000 tdx (pdb symbols) c:\Program Files\Debugging Tools for Windows (x86)\sym\tdx.pdb\CFB0726BF9864FDDA4B793D5E641E5531\tdx.pdb

Loaded symbol image file: tdx.sys

Mapped memory image file: c:\Program Files\Debugging Tools for Windows (x86)\sym\tdx.sys\479190EE16000\tdx.sys

Image path: \SystemRoot\system32\DRIVERS\tdx.sys

Image name: tdx.sys

Timestamp: Fri Jan 18 21:55:58 2008 (479190EE)

CheckSum: 0001391F

ImageSize: 00016000

File version: 6.0.6001.18000

Product version: 6.0.6001.18000

File flags: 0 (Mask 3F)

File OS: 40004 NT Win32

File type: 3.6 Driver

File date: 00000000.00000000

Translations: 0409.04b0

CompanyName: Microsoft Corporation

ProductName: Microsoft® Windows® Operating System

InternalName: tdx.sys

OriginalFilename: tdx.sys

ProductVersion: 6.0.6001.18000

FileVersion: 6.0.6001.18000 (longhorn_rtm.0801181840)

FileDescription: TDI Translation Driver

LegalCopyright: © Microsoft Corporation. All rights reserved.

So we glean some more insight: who makes the module, and the possible cause of the problem.

I looked at the STACK_TEXT and there are references to tcpip and NETIO, which seem to allude to a network problem. So I googled others with a BSOD and tdx.sys problem, and there is a hotfix for this problem. However, a BIG word of caution: please do not download the hotfix if this particular problem does not apply to you. Microsoft suggests using the Microsoft Update procedures, which will include all hotfixes.

To obtain the link to the hotfix for the network problem Google “Hotfix 934611 microsoft”.

I did not download this hotfix but rather opted to update my service pack. Currently, Vista is at Service Pack 2; I only had Service Pack 1. So I’ll see if this fixes the problem.

To check what Service Pack you have installed and what bit version (32bit or 64bit) go to:

“Start/Computer”. Right-click “Computer” and then click “Properties”. You’ll see the Service Pack information under the heading “Windows Edition”. Under the heading “System” (around midway down the page) you’ll see “System type:”, which displays whether you have the 32-bit or 64-bit version installed.

To obtain the Service Pack 2 for Vista Google “sp2 Vista Microsoft”.


Source by Victor Kimura

What is a Voltage Controlled Oscillator (VCO)?


Voltage controlled oscillators are commonly abbreviated as VCOs. VCOs are electrical circuits that yield an oscillatory output voltage. A VCO is an oscillator whose output frequency is proportional to the applied input voltage. A VCO circuit typically consists of an LC tank circuit, with an inductor (L) and a capacitor (C), along with one or two transistors and a buffer amplifier. A VCO gives a periodic output signal whose parameters are directly related to the level of the input control voltage. The center frequency of a VCO is the frequency of the periodic output signal produced when the input control voltage is set to a nominal level. The voltage-controlled oscillator has a characteristic gain, which is often expressed as the ratio of the VCO output frequency to the VCO input voltage.

VCOs often utilize a variable control voltage input to produce a frequency output. The control voltage input typically may be tuned so that the VCO produces a desired operational frequency output; the input control voltage is then adjusted up or down to control the frequency of the periodic output signal. A voltage controlled oscillator is thus capable of changing its oscillating frequency in response to a change in control voltage. A VCO typically employs one or more variable capacitors, commonly called varactors, to allow adjustment of the frequency of oscillation. The tuning range of the VCO refers to the range of oscillation frequencies attained by varying the varactors.
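As a back-of-the-envelope sketch of these relationships, the snippet below computes the LC tank’s resonant frequency and models an idealized linear VCO; all component values and the gain figure are illustrative assumptions, not taken from the article:

```python
import math

def lc_resonant_freq(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def vco_output_freq(f_center, gain_hz_per_volt, v_control, v_nominal=0.0):
    """Idealized linear VCO: output frequency rises in proportion
    to the control voltage above its nominal level."""
    return f_center + gain_hz_per_volt * (v_control - v_nominal)

# Tuning range of a tank with a 10 uH inductor and a varactor
# swept from 20 pF to 80 pF (assumed values, for illustration).
L = 10e-6
f_high = lc_resonant_freq(L, 20e-12)  # smallest C -> highest frequency, ~11.3 MHz
f_low = lc_resonant_freq(L, 80e-12)   # largest C -> lowest frequency, ~5.6 MHz

# Linear model: 10 MHz center frequency, 1 MHz/V gain, 2 V control input.
f_out = vco_output_freq(10e6, 1e6, 2.0)  # -> 12 MHz
```

Note how halving the tank capacitance raises the frequency by only √2, which is why varactor-tuned VCOs trade off tuning range against linearity.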

Two important parameters in VCO design are sweep range and linearity. Linearity correlates the change in frequency of the VCO output to the change in the control voltage. The sweep range is the range of frequencies producible by the VCO control voltage. Various types of VCOs exist; VCOs built from bipolar junction transistors, for example, have been used to generate outputs ranging from 5 to 10 MHz.

Voltage controlled oscillators are basic building blocks of many electronic systems, especially phase-locked loops (PLLs), and may be found in computer disk drives, wireless electronic equipment such as cellular telephones, and other systems in which oscillation frequency is controlled by an applied tuning voltage. Voltage oscillator components are an almost inevitable part of all digital communication equipment. VCOs are used for producing the local oscillator (LO) signals that the transmitter and receiver systems use for frequency up-conversion and down-conversion, respectively. Wireless subscriber communication units such as GSM handsets use voltage oscillator circuits for generating radio frequency signals. VCOs are also employed in many synthesizer and tuner circuits, the television being a prime example. High frequency VCOs are used in applications like processor clock distribution and generation, system synchronization, and frequency synthesis.


Source by Wayne S Holt

10 Effective and Easy Steps for Clean Room Design, ISO 14644


In clean room design we establish and maintain an environment with a low level of environmental pollutants such as dust, airborne microbes, aerosol particles, and chemical vapors. Designing an environment as sensitive as the clean room is not easy, but the 10 steps below will definitely help you and define an easy way to design it.

Most clean room manufacturing processes require the extremely stringent conditions provided by the clean room. Designing the clean room in a proper, orderly way is very important, since cleanrooms have complex mechanical systems and high construction, operating, and energy costs. The steps below present evaluation methods for cleanroom design: people/material flow in factories, classification of space cleanliness, space pressurization, space supply airflow, space air exfiltration, space air balance, remaining variables to be evaluated, selection of the mechanical system, calculation of heating/cooling loads, and requirements for support space.

1. Evaluate the People/Material Flow Layout:

It is essential to assess the material and people flow within the cleanroom suite. All critical processes should be isolated from personnel access doors and pathways; this helps because cleanroom workers are a cleanroom’s biggest contamination source.

There should be a strategy for critical spaces: compared to less critical spaces, the most critical spaces should have a single access point, to prevent the space from becoming a pathway to others. Some pharmaceutical and biopharmaceutical processes are susceptible to cross-contamination from other pharmaceutical and biopharmaceutical processes. Process cross-contamination therefore needs to be carefully evaluated for material process isolation, for raw material inflow routes and containment, and for finished product outflow routes and containment.

2. Identify the Space Cleanliness Classification:

It is very important to know the primary cleanroom classification standard and what the particulate performance requirements are for each cleanliness classification at the time of selection. The Institute of Environmental Science and Technology (IEST) Standard 14644-1 defines the different cleanliness classifications (1, 10, 100, 1000, 10000, and 100000) and the allowable number of particles at different particle sizes.

3. Identify the Space Pressurization:

Maintaining a positive air space pressure, relative to adjoining dirtier cleanliness classification spaces, is essential to keep contaminants from infiltrating a cleanroom. It is extremely hard to reliably maintain a space’s cleanliness classification when it has neutral or negative space pressurization. What should the pressure differential between spaces be? Various studies evaluated contaminant infiltration into a cleanroom versus the pressure differential between the cleanroom and an adjoining uncontrolled environment. These studies found a pressure differential of 0.03 to 0.05 in. w.g. to be effective in reducing contaminant infiltration. Pressure differentials above 0.05 in. w.g. do not provide substantially better contaminant infiltration control than 0.05 in. w.g.
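The pressurization guidance can be condensed into a trivial check function; the thresholds come from the 0.03 to 0.05 in. w.g. band discussed above, while the function name and wording of the verdicts are my own:

```python
def classify_dp(delta_p_inwg):
    """Classify a measured room-to-adjacent pressure differential (in. w.g.)
    against the 0.03-0.05 in. w.g. band found effective for keeping
    contaminants out of a positively pressurized cleanroom."""
    if delta_p_inwg <= 0:
        return "neutral/negative: cleanliness class at risk"
    if delta_p_inwg < 0.03:
        return "positive, but below the 0.03 in. w.g. effective band"
    if delta_p_inwg <= 0.05:
        return "within the effective 0.03-0.05 in. w.g. band"
    return "above 0.05 in. w.g.: no substantial extra benefit"
```

For example, `classify_dp(0.04)` reports the differential as within the effective band, while `classify_dp(0.1)` flags over-pressurization as wasted effort.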

4. Identify the Space Supply Airflow:

The space cleanliness classification is the primary variable in determining a cleanroom’s supply airflow. Looking at table 3, each clean classification has an air change rate. For example, a Class 100,000 cleanroom has a 15 to 30 ach range. The cleanroom’s air change rate should take the anticipated activity within the cleanroom into account. A Class 100,000 (ISO 8) cleanroom having a low occupancy rate, low particle generating process, and positive space pressurization in relation to adjacent dirtier cleanliness spaces might use 15 ach, while the same cleanroom having high occupancy, frequent in/out traffic, high particle generating process, or neutral space pressurization will probably need 30 ach.
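The air-change arithmetic behind these numbers is straightforward: supply airflow in CFM is room volume times air changes per hour, divided by 60 minutes. A minimal sketch, where the 9-ft ceiling height is an assumed value for illustration:

```python
def supply_airflow_cfm(floor_area_sqft, ceiling_height_ft, ach):
    """Supply airflow in CFM from room volume and air changes per hour:
    CFM = volume (cu ft) * ACH / 60 minutes."""
    volume_cuft = floor_area_sqft * ceiling_height_ft
    return volume_cuft * ach / 60.0

# Hypothetical 1,000-sq-ft Class 100,000 (ISO 8) room with a 9-ft ceiling:
low = supply_airflow_cfm(1000, 9, 15)   # low-activity end of the 15-30 ach range
high = supply_airflow_cfm(1000, 9, 30)  # high-activity end
```

For this assumed room, the 15 to 30 ach range translates to 2,250 to 4,500 CFM of supply air, which shows why cleanroom air handlers are sized far beyond thermal load alone.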

5. Identify the Space Air Exfiltration Flow:

The majority of cleanrooms are under positive pressure, resulting in planned air exfiltration into adjoining spaces at lower static pressure and unplanned air exfiltration through electrical outlets, light fixtures, window frames, door frames, wall/floor interfaces, wall/ceiling interfaces, and access doors. It is important to understand that rooms are not hermetically sealed and do leak. A well-sealed cleanroom will have a 1% to 2% volume leakage rate. Is this leakage bad? Not necessarily.

6. Identify the Space Air Balance:

The space air balance accounts for all air entering and leaving the room: supply air must equal return air plus planned and unplanned exfiltration plus any process exhaust. Because the room is positively pressurized, the return airflow is deliberately set below the supply airflow, with the difference covering the 1% to 2% envelope leakage described in the previous step and any exhaust.
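Assuming the balance is simply supply = return + exhaust + leakage, with the 1% to 2% leakage figure from step 5 taken at an assumed 1.5% midpoint, the bookkeeping can be sketched as:

```python
def required_return_cfm(supply_cfm, exhaust_cfm=0.0, leakage_fraction=0.015):
    """Return airflow needed to balance a positively pressurized room.

    Balance: supply = return + process exhaust + envelope leakage.
    The default leakage_fraction of 1.5% is an assumed midpoint of the
    1-2% leakage rate typical of a well-sealed cleanroom.
    """
    leakage_cfm = supply_cfm * leakage_fraction
    return supply_cfm - exhaust_cfm - leakage_cfm

# A room supplied with 2,250 CFM and no process exhaust would return
# about 2,216 CFM, the ~34 CFM difference exfiltrating through the envelope.
ret = required_return_cfm(2250.0)
```

A negative result would indicate the exhaust and leakage exceed the supply, i.e. the room cannot stay positively pressurized as specified.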

7. Assess Remaining Variables:

Other factors to be assessed include:

Temperature: Cleanroom workers wear frocks or full bunny suits over their normal garments to lessen particulate generation and potential contamination. Because of this additional clothing, it is important to maintain a lower space temperature for worker comfort. A space temperature range between 66°F and 70°F will provide comfortable conditions.

Humidity: Due to a cleanroom’s high airflow, a large electrostatic charge builds up. When the ceiling and walls carry a high electrostatic charge and the space has low relative humidity, airborne particulate will attach itself to those surfaces. When the space relative humidity rises, the electrostatic charge is discharged and all the captured particulate is released in a short period of time, causing the cleanroom to go out of specification. A high electrostatic charge can also damage electrostatic-discharge-sensitive materials. It is vital to keep the space relative humidity high enough to reduce electrostatic charge buildup. An RH of 45% ±5% is considered the ideal humidity level.

Laminarity: Very critical processes may require laminar flow to lessen the chance of contaminants getting into the air stream between the HEPA filter and the process. IEST Standard #IEST-WG-CC006 gives airflow laminarity requirements.

Electrostatic Discharge: Beyond space humidification, some processes are extremely sensitive to electrostatic discharge damage, and it is necessary to install grounded conductive flooring.

Vibration and Noise Levels: Some precision processes are extremely sensitive to noise and vibration.

8. Identify the Mechanical System Layout:

Various factors influence a cleanroom’s mechanical system design: space availability, available funding, process requirements, cleanliness classification, required reliability, energy cost, building codes, and local climate. Unlike normal A/C systems, cleanroom A/C systems supply considerably more air than is needed to meet the cooling and heating loads.

Class 100,000 (ISO 8) and lower-ach Class 10,000 (ISO 7) cleanrooms can have all the air pass through the AHU. Looking at Figure 3, the return air and outside air are mixed, filtered, cooled, heated, and humidified before being supplied to terminal HEPA filters in the ceiling. To prevent contaminant recirculation in the cleanroom, the return air is captured by low wall returns. For higher-ach Class 10,000 (ISO 7) and cleaner cleanrooms, the airflows are too high for all the air to pass through the AHU. Looking at Figure 4, a small portion of the return air is sent back to the AHU for conditioning. The remaining air is returned to the recirculation fan.

9. Perform Cooling/Heating Calculations:

When performing the cleanroom heating/cooling calculations, consider the following:

Utilize the most conservative climate conditions (99.6% heating design, 0.4% drybulb/mean-coincident wetbulb cooling design, and 0.4% wetbulb/mean-coincident drybulb cooling design data).

  • Incorporate filtration into the calculations.
  • Incorporate humidifier manifold heat into the calculations.
  • Incorporate the process load into the calculations.
  • Incorporate recirculation fan heat into the calculations.
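Two of the standard relations used in such calculations can be sketched as follows; these are common HVAC rules of thumb for standard air (not formulas given in the article), and the example airflow and fan power are assumed values:

```python
def sensible_cooling_btuh(cfm, delta_t_f):
    """Sensible load for standard air: Q (Btu/h) = 1.08 * CFM * dT (deg F)."""
    return 1.08 * cfm * delta_t_f

def fan_heat_btuh(fan_kw):
    """Recirculation fan motor heat rejected into the airstream
    (1 kW = 3,412 Btu/h), one of the loads the checklist above adds in."""
    return fan_kw * 3412.0

# Assumed example: 4,500 CFM of supply air cooled 20 deg F, plus a 5 kW fan.
coil_load = sensible_cooling_btuh(4500, 20)  # sensible coil load
fan_load = fan_heat_btuh(5)                  # extra load from the fan itself
```

The fan-heat term illustrates why the checklist matters: at cleanroom airflows the recirculation fans alone can add tens of thousands of Btu/h that a conventional load calculation would miss.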

10. Identify Mechanical Support Space Requirements:

Cleanrooms are mechanically and electrically intensive. As the cleanroom’s cleanliness classification becomes cleaner, more mechanical support space is needed to provide adequate support for the cleanroom. Using a 1,000-sq-ft cleanroom as an example, a Class 100,000 (ISO 8) cleanroom will require 250 to 400 sq ft of support space, a Class 10,000 (ISO 7) cleanroom will require 250 to 750 sq ft, a Class 1,000 (ISO 6) cleanroom will require 500 to 1,000 sq ft, and a Class 100 (ISO 5) cleanroom will require 750 to 1,500 sq ft.
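The per-1,000-sq-ft figures above fit naturally into a small lookup table; scaling them linearly to other room sizes is my own simplifying assumption for illustration, since real support-space needs do not scale perfectly with floor area:

```python
# Support-space ranges (sq ft) per 1,000 sq ft of cleanroom, from the
# figures quoted above for each cleanliness classification.
SUPPORT_SPACE_SQFT = {
    "ISO 8": (250, 400),    # Class 100,000
    "ISO 7": (250, 750),    # Class 10,000
    "ISO 6": (500, 1000),   # Class 1,000
    "ISO 5": (750, 1500),   # Class 100
}

def support_space_range(iso_class, cleanroom_sqft):
    """Scale the per-1,000-sq-ft support-space range linearly to the
    actual cleanroom size (an assumed rule of thumb, not exact)."""
    lo, hi = SUPPORT_SPACE_SQFT[iso_class]
    scale = cleanroom_sqft / 1000.0
    return lo * scale, hi * scale

# Assumed example: a 2,000-sq-ft ISO 7 suite would need roughly
# 500 to 1,500 sq ft of mechanical support space under this rule.
lo, hi = support_space_range("ISO 7", 2000)
```

The table makes the trend explicit: each step cleaner roughly doubles the plant space budget, which is worth knowing before the building footprint is fixed.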

For expert guidance on clean room design, read also https://www.operonstrategist.com/clean-room-design-consultant/


Source by Neha Mate