Industrial Engineering: 2009

Tuesday, April 14, 2009

Brain Machine Interface Technology

Honda Research Institute Japan Co., Ltd., Advanced Telecommunications Research Institute International (ATR), and Shimadzu Corporation have collaboratively developed the world's first Brain Machine Interface (BMI) technology that uses electroencephalography (EEG) and near-infrared spectroscopy (NIRS), together with newly developed information-extraction technology, to enable control of a robot by human thought alone. It does not require any physical movement such as pressing buttons. The companies plan to develop this technology further for application to human-friendly products by integrating it with intelligent and/or robotic technologies.

During the human thought process, slight changes in electrical current and blood flow occur in the brain. The most important factor in developing BMI technology is the accuracy with which these changes are measured and analyzed. The newly developed BMI technology uses EEG, which measures changes in electrical potential on the scalp, and NIRS, which measures changes in cerebral blood flow, together with a newly developed information-extraction technology that statistically processes the complex information from these two types of sensors. As a result, it is possible to distinguish brain activities with high precision from thought alone, without any physical motion.

Source: humbleradio

Which supply chain design is right for You?


There are marked differences between supply chain designs. The challenge for supply chain managers is to acknowledge that they may no longer be using the optimal designs for the requirements of their businesses.

It is widely accepted that the design of a product is responsible for more than 80 percent of its lifetime cost. In the same way, the design of a supply chain has a tremendous impact on the cost and value attributes of the product over its lifetime. The impact will only continue to grow as the battleground shifts in the 21st century from competition between organizations to competition between supply chains. Supply chain design will become a key source of competitive advantage.

When considering supply chain designs, we can exploit some fundamental principles to enhance product flow across the value stream and to respond quickly to changing customer expectations. However, supply chains are not static. The supply chain manager must continuously fine-tune planning and execution systems, and the software that supports them, to match evolving industry dynamics.

What are these dynamics? The first is the fast-changing business landscape itself. Customers now require higher levels of service and attention, and the move toward personalization (the so-called "market of one" concept) puts pressure on supply chains geared to mass markets. Second, competitors may deploy supply chains that give them an immediate edge, as happened when Dell launched its direct-sales, configure-to-order business model. Finally, there is now a wider range of more flexible supply chain designs, with fewer barriers to switching from one design to another. Complicating matters, however, is the fact that supply chain managers long familiar with an existing supply chain design may find it hard to understand or embrace different designs that better suit new market conditions.

In this article, we review the four main types of supply chain design, examining their attributes and weaknesses. We argue that supply chain managers today must be prepared to review the efficacy of their current supply chain designs and be ready to alter those designs to better fit their companies' business needs.

*Four Types of Supply Chain Design*

Supply chains deliver products to the customer using one of the following four basic process structures.

1. Build-to-Stock (BTS).

The product is built prior to demand with a standard bill of materials (for example, Diet Cola). The BTS supply chain has the fastest response time to the customer. The customer order is placed and satisfied either from a retail shelf or from a finished-goods stocking point. Because the customer values immediate response for a BTS product, "impulse" products, such as many types of consumer goods, are supplied using a BTS model. However, the price of this immediate satisfaction is some loss of selectivity. The customer takes what is available in predetermined configurations supplied by the manufacturer. One common consequence of such limited choice: the customer may purchase more product features than actually desired. The BTS model is by no means limited to discretionary consumer purchases. Many critical repair components, such as aircraft components, are supplied using a BTS supply chain design.

2. Configure-to-Order (CTO).

The product is assembled to demand with standard modules or components; desktop computers offer an example. The CTO supply chain introduces orders prior to assembly and pushes the order to the customer but replenishes (pulls) parts to build the order. In this arrangement, the customer receives greater end-item choice but sacrifices some of the immediacy of order fulfillment. The automobile industry offers another good example. Automakers and their distributors and dealers are in the initial stages of implementing CTO supply chains. The goal is to offer the customer a wider selection of color/option combinations than is typically available on the dealer lot. However, the customer will not be able to drive off the lot at the time of purchase; she must wait until the automobile is assembled to her specifications. A critical issue for those using (or considering) CTO supply chains is how quickly the customer's needs are satisfied; in particular, how much they can reduce the leadtime from assembly to final delivery. The North American automobile industry is now targeting delivery of a custom-assembled car within a week of the order being placed, compared to the multi-week window in which it operates today.

3. Build-to-Order (BTO).

The product is fabricated and assembled to order with a standard bill of materials. Examples include executive jets and industrial machinery. In the BTO supply chain, customer orders are introduced prior to fabrication or at the start of the production process. BTO products are usually highly customized to customer specification, very costly to manufacture, or both. The BTO planning requirements are captured in a typical materials-requirements-planning (MRP) structure. In effect, BTO commits components, sometimes far upstream into the supply base, to specific customer orders. Once requirements are established, the supply chain produces to specific quantities and due dates. This is in contrast to the pull structure of BTS and CTO supply chains, which respond to replenishment signals inside a planned capacity stream. Such locked-in quantities and due dates mean that MRP execution is subject to significant expediting and exception activity. Any disruption in the flow of materials can cause due-date slippage throughout the complete build sequence. So the typical MRP-driven supply chain reshuffles purchase-order due dates, the dispatch list, and customer promises. These actions magnify the variations in capacity up and down the supply chain.

4. Engineer-to-Order (ETO).

The product is fabricated and assembled to order with unique parts and drawings; examples include a thermo-chemical reactor or the U.S. space station. This type of supply chain responds to a truly customized product that requires unique drawings and parts. Custom products manufactured for highly specific uses fit well into this category. The leadtime from order to delivery is often long because of the product's custom nature. Indeed, the front-end engineering is often a neglected but costly process within this supply chain. MRP planning prevails in ETO. The ETO supply chain is the prototypical single-lot, job-shop environment. Upstream planning and logistics are often varied and complex compared to downstream distribution. Distribution and transportation of ETO products are often planned in units of one.

*Design Trade-Offs*

Each supply chain alternative presents different value trade-offs for its participants. For example, in the BTS supply chain, suppliers speculate on assured demand by moving goods forward to satisfy the customer's immediate demands. Any forecast errors in numbers or types of finished products have to be corrected at the most inflexible point in the supply chain. Any overstock errors occur at the point where all components have been committed to a finished item. If the item is perishable or subject to obsolescence, the reverse logistics operation required to recapture value is at its most expensive point. Overstocked product must be moved, disassembled, remanufactured, and restocked. Likewise, under-stock errors require the supply chain to react from the point of the longest leadtime. That is, product shortages must be made up from components. In addition, when finished items are placed at the point of customer consumption, forecast errors at that location create forward stock-rebalancing activities that add to overall supply chain cost. Thus, the advantage of quick response times comes at the cost of inevitable errors in providing the right product at the right place.

From the producer's perspective, the CTO supply chain has numerous advantages. First, there is greater postponement than with a BTS supply chain: the producer need not commit to a final product until an order is received. The components and modules may require some precommitment but not the end-item configurations, because the order is entered at the preassembly point. Second, because the producer doesn't commit to the finished item, there is less aggregate inventory, because there is less diversity of modules than of finished products. Third, the producer need only forecast and plan at the component level, as we discuss in more detail later in the section on "CTO Rate-Based Planning."

In the case of build-to-order, the customer waits the entire time for the product to be completed. For some products this can take weeks, months, or even years. BTO products are often manufactured from order backlogs, which serve as the "shock absorber" for variations in manufacturing and demand rates. Because customer leadtime is exacerbated by the backlog wait time, BTO manufacturers sometimes move the orders of preferred customers forward in the order-release schedule. However, such order "shoe-horning" disrupts the capacity planning of the manufacturing facility.

The BTO manufacturer has less speculation risk than BTS and CTO manufacturers because fabrication is not committed prior to a firm order. Although this feature benefits producers and their suppliers, it provides few benefits to customers. The customer accepts a long leadtime as a painful necessity in order to benefit from a high degree of customization. In addition, customers served by BTO supply chains are required to forecast their requirements over the leadtime fulfillment interval. These forecasts are often inaccurate, requiring order and delivery adjustments prior to the promised delivery date. For example, at one BTO plant we noted that finished goods were sitting in warehouses and on railcar sidings. Upon inquiry, the plant manager stated that they really didn't have finished-goods inventory because they built to order. Yet the physical evidence was before us. The finished goods were present because customers delayed receipt of goods as the due date approached. That is, customers' real-time needs caused them to request a delay in shipment. The customers' forecast error became this plant's finished-goods inventory.

In general, the trade-offs between BTS, CTO, BTO, and ETO supply chains can be characterized along five critical dimensions, as shown in the "Supply Chain Design and Value Trade-off" box on this page.

*Toward the Ideal Supply Chain Design*

The phrase "maximize external variety with minimal internal variety" succinctly captures the principle that managers should follow when designing any supply chain. In other words, the ideal design is one in which a small number of components are used to configure a large variety of end products. This principle can be accomplished by structuring product offerings so that material and resource commitment is postponed for as long as possible. We will refer to this as the RAP principle (keep in-process inventory as "raw as possible"), which is shown in Exhibit 2. The RAP principle is essentially realized by the CTO supply chain, which differentiates the product only at the final-assembly stage.

The extent to which the RAP principle can apply to a supply chain's design is often determined by the configuration of the item being produced and the leadtime requirements of the customer. But even consumer packaged goods that use a BTS system can still benefit from the raw-as-possible principle. For example, a bottling facility can employ the RAP principle by using in-line labeling of two-liter bottles rather than purchasing prelabeled bottles from suppliers.

The product configuration is typically captured by its bill of materials (BoM). Therefore, the first step in any supply chain design or redesign effort involves configuring or reconfiguring the BoM to facilitate the RAP principle. Companies can achieve a RAP supply chain design simply by pulling unique parts that are aggregated at different BoM levels, or different points in the assembly process, to the same level across different end products. For example, in the apparel industry, the identification of a specific brand name (say, with a decal) can be done further downstream simply by altering the BoM accordingly. This postpones the point at which these unique parts (decals) are assembled onto the end products, thus delaying differentiation according to the RAP principle.
Closely related to the RAP principle is the principle of aggregation, or risk pooling. It is well known that aggregated demand has lower variation than demand for individual products. In some instances, the BoM can be optimized to exploit risk pooling. The idea is to pull unique materials to the same location (BoM level) across multiple stock-keeping units. By so doing, the producer can strategically locate safety stock upstream to pool the risk across individual product BoMs.
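The risk-pooling effect described above can be made concrete with a little arithmetic. The sketch below uses invented demand figures and assumes the product demands are statistically independent, so pooled variances simply add; it is an illustration of the principle, not a planning tool.

```python
import math

# Hypothetical weekly demand standard deviations for three end products
# that each consume one unit of a shared component. All figures are
# invented for illustration; demands are assumed independent.
product_stddevs = [120.0, 90.0, 150.0]

# Safety stock held separately at each product's BoM level (z = 1.65,
# roughly a 95 percent service level).
z = 1.65
separate = sum(z * s for s in product_stddevs)

# Safety stock when the component is pooled at a single upstream BoM
# level: independent variances add, so the pooled standard deviation is
# the square root of the summed squares.
pooled = z * math.sqrt(sum(s * s for s in product_stddevs))

print(f"separate: {separate:.0f} units, pooled: {pooled:.0f} units")
```

With these figures, pooling cuts the safety stock from 594 units to about 350, which is the quantitative content of the claim that aggregated demand has lower variation than individual demands.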

*Shifting from One Design to Another*

Supply chain designs should respond to customer values rather than being treated as inherently appropriate for a particular type of product. That is, the product's structure and supply chain design should respond to changing customer requirements instead of being taken as a given, frozen in time. Indeed, we have observed two interesting shifts in supply chain design that align with the RAP principle: a marked shift from BTS to CTO designs and a similar movement from BTO to CTO.

Shifting from BTS to CTO. As customers seek products tailored more closely to their needs, producers are widening their product offerings by extending their current product lines and adding new ones. There is, however, a trade-off between the scope of products offered and the resources required to support that scope. This trade-off is more severe for BTS supply chains. That is, a BTS supply chain requires greater "asset intensity" to support downstream distribution, inventory, and retail facilities than CTO or BTO supply chains. This asset intensity, in turn, must be supported by larger profit margins. The CTO model can provide better margins for many products because it often requires less asset and expense commitment to support downstream supply chain elements.

A number of industries are showing this shift from a BTS to a CTO model. Already, the recording industry is letting consumers configure music CDs by choosing their own songs. In the North American automobile market, some companies are experimenting with CTO supply chains for consumer-configured automobiles. Many consumer-electronics products can be assembled and distributed quickly if a CTO supply chain is deployed. As Dell moves cautiously into the consumer-electronics markets, we expect to see the segment migrate more quickly toward a CTO model.

Shifting from BTO to CTO. The shift is even more dramatic as more traditional BTO companies exploit the RAP principle and move toward CTO solutions. We have observed trends in this direction for the last five years across a number of industries. Historically, many BTO companies have argued that fast-cycle supply chains do not apply to them because they are highly customized "job shops." However, that stance incorrectly views the supply chain from the perspective of the final product. Many BTO supply chains actually use many common components, ingredients, and modules that are sufficiently repetitive for them to consider moving toward a CTO solution.

One company we know well, a producer of electrical conduit, has made that move. This company specializes in customized solutions for laying electrical conduit within commercial buildings. Every order used to be treated as an engineer-to-order job; as such, it was processed from custom drawings. Yet the company uses the same types and sizes of conduit on every job. We proposed that the company design the conduit in sections of standard lengths and angles. A software program could then arrange the custom-conduit layout using standardized elements for more than 95 percent of the job. The standard sections could then be pre-staged as uncommitted parts available for configuration. By applying this CTO supply chain design, the company reduced its order-to-delivery cycle time by more than 80 percent.

A typical objection to such an initiative is that it creates greater inventory risk. Although that claim is technically true, the consequences are not as severe as might be expected. Process inventory is certainly required, but it is often less than the sum of in-process jobs crawling through the manufacturing facility in a traditional job-shop environment. In addition, the amount of in-process parts is matched to the estimated build rate per time period. If the planning and execution time cycle is fast enough, then the amount of pre-committed parts can remain relatively small, thus reducing the inventory risk.

*Relationship Between Design, Planning, and Cycle Alignment*

The supply chain design is strongly influenced by the product structure. In turn, the product structure influences both rate-based planning and cycle alignment. Rate-based planning is the mechanism for aligning capacity with execution in a lean supply chain, and cycle alignment is the degree to which demand and fulfillment are matched in time. These mechanisms are familiar to most supply chain managers, but amid the everyday challenges it is easy to underestimate their significance to supply chain design. To show that impact, we discuss the two mechanisms in more detail in the context of BTS and CTO supply chains.

*BTS Rate-Based Planning*

Typical BTS producers use planning bills of materials to forecast replenishment requirements for finished products and components. With planning focused on the end items, the BTS manufacturer uses the end-item demand rate to estimate the rate of component replenishment. The greater the scope of the end items, the more inventory is required to support immediate response. Put another way, the cost of speculative end-item placement limits the scope of end items that can be supported economically. For example, a beverage bottler can provide various beverage flavors and container sizes but cannot economically support finer classifications of flavor or size. While the beverage producer can provide several sizes of soda, it cannot provide additional single-ounce increments from five to 16 ounces without exceeding the inventory and space levels it can reasonably and economically support. Thus, there are trade-offs between scope and economics for the BTS producer.

The BTS supplier uses the planning bill of materials to design the supply chain capacity requirements. Components are speculatively placed to support upstream replenishment signals. For example, in a soft-drink supply chain, the periodic consumption of finished goods forms the basis of the bottling facility's production schedule, as illustrated in Exhibit 3. In this figure, two-liter bottles of a soft drink have a forecasted demand rate of 5,000 units per time period. This forecast is used to determine the demand rate and the associated demand variation for each end item. In this example, the three-sigma deviation of the forecasted demand sets the boundaries of expected demand between 4,000 and 6,000 units, or plus or minus 20 percent of the forecasted consumption rate. Thus, the forecast establishes the pacing rate as well as the expected variation to be accommodated by the bottling lines in a given time period.

The relationship between the finished product and its ingredients (components) is established in the bill of materials (ingredient card). For example, if the ingredient card requires one pound of concentrate for every 500 units of two-liter cola, then the concentrate demand rate is established at 10 pounds to support the forecasted rate of 5,000 units. The concentrate rate boundaries are set at plus or minus 20 percent, as with the finished product. The demand rates for the remaining ingredients are evaluated similarly.
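The ingredient-card calculation above is mechanical enough to sketch directly. The snippet below uses the article's soft-drink figures (5,000 units, a plus-or-minus 20 percent band, one pound of concentrate per 500 units); the bottle and cap lines are invented entries added only to show that every ingredient is evaluated the same way.

```python
# A minimal sketch of translating an end-item demand rate into component
# demand rates via the ingredient card (bill of materials), using the
# figures from the soft-drink example above.
forecast_rate = 5000          # two-liter units per time period
variation = 0.20              # +/- 20% band from the three-sigma deviation

ingredient_card = {
    "concentrate_lb": 1 / 500,   # one pound per 500 two-liter units
    "bottles": 1,                # hypothetical: one bottle per unit
    "caps": 1,                   # hypothetical: one cap per unit
}

for ingredient, per_unit in ingredient_card.items():
    rate = forecast_rate * per_unit
    low, high = rate * (1 - variation), rate * (1 + variation)
    print(f"{ingredient}: plan {rate:g}, band {low:g} to {high:g}")
```

For concentrate this reproduces the 10-pound planned rate with an 8-to-12-pound band, mirroring the 4,000-to-6,000-unit band on the finished product.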

In this way, product "streams" can be planned upstream from the point of order fulfillment (finished goods). As goods are sold, replenishment signals move upstream to replenish the goods sold. The supply chain is able to respond because the stream capacity is in place to satisfy actual demands. One can imagine that this process of linked pull signals would ideally move all the way back to the suppliers of the bottles, for example. If the beverage company could configure the supply chain so that demands are satisfied from a centralized warehouse, with fast transportation replenishment using "milk runs" and in-transit transfers of product, then the aggregate inventory could be reduced. Consolidation confers the benefits of reduced inventory by pooling the demand variation of the forward stocking locations into a single location. This same result can be obtained by employing advanced shipping notices to support cross-docking at the forward stocking location. In this design, the advantages of transportation economies are achieved without committing the forward stocking location to stock levels to replenish demand. The demand is replenished upstream, while the forward stocking location merely breaks, cross-docks, and stages consolidated shipments into store-level runs.
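The linked pull signals described above amount to a simple loop: each period's sales generate an upstream replenishment signal of exactly the quantity sold, restoring the stocking point to its planned level. The toy simulation below illustrates that behavior; the target level and daily sales are invented figures.

```python
# A toy pull-replenishment loop: each sale at the stocking point
# generates an upstream replenishment signal for the quantity sold,
# keeping stock at its planned target. All figures are invented.
target_stock = 500
stock = 500
replenishment_signals = []

for sold in [80, 120, 95]:          # hypothetical daily sales
    stock -= sold                   # goods leave the stocking point
    signal = target_stock - stock   # pull signal = exactly what was sold
    replenishment_signals.append(signal)
    stock += signal                 # upstream stream replenishes it

print(replenishment_signals, stock)
```

The signals simply echo consumption (80, 120, 95), which is why capacity in the stream, not a forecast of each order, is what lets the chain respond.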

*BTS Cycle Alignment*

The BTS supplier must also align what we term the "wheels" of the supply chain: that is, the production, delivery, and demand cycles shown in Exhibit 4 on page 56. The wheels capture the time within which the supply chain can respond across the complete range of products. Although this concept is understood by most supply chain managers, it is often not given sufficient emphasis. We regularly discuss the need for inventory to buffer variation, but rarely do we discuss the need for inventory to support cycle imbalances, which is usually much more important.

In the case of supply, the cycle is measured as the amount of time required to provide all the components to support the product range; for production, it's the amount of time in which the production system is able to produce all the products; for delivery, it's the amount of time between delivery dates for all the products; and for demand, it's the amount of time to sell the complete range. The BTS supply chain principle is to align these wheels to the demand wheel, as shown in Exhibit 4.

Assume, for simplicity, that the BTS beverage company packages five different flavors of beverage in a single shipment. Now suppose that all five flavors are purchased across the retail network every day. However, moving one step back in the supply chain, if the product is delivered every three days, then the retailer must stock three days' supply of each flavor on the shelf to avoid stockouts between deliveries. Since the delivery wheel is not aligned to the demand wheel, inventory must result.

The preceding discussion underscores the retailers' need for frequent deliveries to ensure that the delivery wheel more closely matches the demand wheel, thus reducing shelf-space requirements for a particular SKU without compromising fill rates. The discussion also highlights the benefits of using third-party logistics providers (3PLs). 3PLs can provide aggregated transportation logistics across multiple suppliers to support increased delivery frequency and scale economies.

Now consider the bottling facility. Assume that the bottling facility is able to produce the five flavors only every seven days because of long changeover times and large batch sizes between flavors. The bottling facility will therefore need to hold multiple days of finished-goods inventory to support the delivery trucks. This view of the supply chain makes the value of improvements very obvious. If the beverage supplier could create a more flexible bottling facility so that every flavor is produced every day, and the delivery system could be made to deliver every flavor every day, then the whole supply chain would be fully aligned with the demand wheel. Such a configuration would reduce "cycle" inventory to the amount needed to support a single day's rate, plus safety stock to cover daily variation. Such a supply chain would flow to the rate of demand.
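The wheel arithmetic above can be sketched in a few lines. The per-flavor daily demand below is an invented figure; the three-day delivery cycle and seven-day production cycle come from the example in the text.

```python
# Cycle stock driven by the mismatch between each "wheel" and the daily
# demand wheel. Daily demand per flavor is a hypothetical figure.
daily_demand_per_flavor = 200   # units/day for each of the 5 flavors
flavors = 5

def cycle_stock(cycle_days):
    """Units held to cover demand between repetitions of a cycle."""
    return flavors * daily_demand_per_flavor * cycle_days

shelf_stock_3day_delivery = cycle_stock(3)    # delivery wheel at 3 days
plant_stock_7day_production = cycle_stock(7)  # production wheel at 7 days
aligned_stock = cycle_stock(1)                # every flavor, every day

print(shelf_stock_3day_delivery, plant_stock_7day_production, aligned_stock)
```

Aligning every wheel to the one-day demand wheel cuts cycle stock from thousands of units to a single day's worth, which is the improvement the flexible-bottling scenario describes.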

The speed with which forecasted demand is adjusted is a function of the sales and operations planning (S&OP) cycle. For example, a monthly planning cycle provides only monthly changes in the planned build and replenishment rates. Any change in forecasted demand must wait for new monthly signals from the S&OP process before the supply chain rate can be adjusted. Such long planning cycles usually generate large rate changes, since demand changes accumulate over the monthly cycle. As the planning cycle moves toward weekly or daily cycles, the supply chain can adjust more smoothly to demand changes. This adjustment must, of course, be coupled with increased flexibility within the supply chain to accommodate the revised rate broadcasts. Such flexibility is a function of engineering variables (constraints) inherent in the process. For example, for now the automobile industry can only economically adjust build rates biweekly, because of the cost of changing the assembly line.

*CTO Rate-Based Planning*

The planning structure for a CTO supply chain is designed such that there is less variety in components than in end items. The supplier does its planning at the module level. For example, Dell does not plan using the demand-rate forecast for its finished products; there are too many possible variations for that to be worthwhile. Rather, Dell uses order-management information to forecast module- and component-demand rates and variation. That way, aggregate demand and variation can be determined at the component level by summing component averages and pooling variances across bills of materials. The upstream suppliers can use that rate information to provide immediate replenishment within the planned capacity stream.

The order-management system tracks actual component and module consumption as orders are configured by customers. If component demand exceeded the planned maximum capacity, the order-management system would respond with an exception. The customer would be told that the component selection is not available-to-promise within normal delivery leadtimes, and an extended promise date would be offered. The customer could then accept the extended promise date or select a different component that has not exceeded planned capacity.

Because the CTO producer builds items to customers' specifications, it cannot afford to lose leadtime by waiting for orders to aggregate into an economical shipment size for a particular region. The producer can gain logistical economies by combining custom orders into a single shipment. Such a strategy reduces the amount of time any particular order waits for an economic shipment size.
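The component-level roll-up described above (summing averages and pooling variances across bills of materials) can be sketched as follows. The configuration names, quantities, and demand statistics are all invented, and the calculation assumes configuration demands are independent, so variances scale by the square of the per-unit quantity and then add.

```python
import math

# Hypothetical CTO planning roll-up: each end configuration has a
# forecast mean/std, and its BoM maps it onto shared modules. Names
# and figures are invented for illustration.
configs = {
    "basic": {"mean": 300, "std": 60, "bom": {"chassis": 1, "hdd": 1}},
    "gamer": {"mean": 120, "std": 50, "bom": {"chassis": 1, "hdd": 2}},
}

component_mean = {}
component_var = {}
for cfg in configs.values():
    for comp, qty in cfg["bom"].items():
        # Means sum directly across BoMs.
        component_mean[comp] = component_mean.get(comp, 0) + qty * cfg["mean"]
        # Assuming independent demands, variances scale by qty^2 and add.
        component_var[comp] = component_var.get(comp, 0) + (qty * cfg["std"]) ** 2

for comp in sorted(component_mean):
    print(comp, component_mean[comp], round(math.sqrt(component_var[comp]), 1))
```

The planner then sets the capacity stream for each module from these component-level rates and their pooled variation, never from the vastly larger set of finished configurations.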

*Continual Review Needed*

There is no one-size-fits-all supply chain design. Nor is there any guarantee of permanence for a supply chain design. Rather, supply chain designs need to change as market sectors are buffeted by shifting customer needs, by unexpected moves from competitors, and by many other factors both internal and external. We have outlined four fundamental supply chain designs and explained the advantages and limitations of each. Each differs in terms of rate-based planning methodologies, order-point initiation, stocking strategies, degree of speculation risk, RAP potential, and push/pull signal placements. We have also pointed to important shifts from one design to another and provided additional viewpoints for examining supply chain performance with fresh eyes. Supply chain design cannot remain static. Just as the Internet has challenged preconceived notions of music distribution, so will technology, product structure, and customer requirements change supply chain architecture.

It is our belief that supply chain managers expose their companies to unnecessary risk when they do not continually review the appropriateness of their supply chain designs. It is crucial that they evaluate competitive dynamics and artfully select the correct design for the correct set of opportunities.

Source: James M. Reeve and Mandyam M. Srinivasan

Balanced Scorecard


The balanced scorecard (BSC) is a concept for measuring whether a company's smaller-scale operational activities are aligned with its larger objectives in terms of vision and strategy. The BSC was first developed and used at Analog Devices in 1987. By focusing not only on financial outcomes but also on the human dimension, the BSC helps provide a more comprehensive view of a company, which in turn helps the organization act in line with its long-term goals. As a strategic management system, it helps managers focus on performance measures while balancing financial objectives against customer, process, and employee perspectives.

Source: Wikipedia

Thursday, March 5, 2009

How to Effectively Manage Demand with Demand Sensing and Shaping Using Point of Sales Data

Predicting what and when the consumer will buy has never been an easy process for manufacturers or retailers. Burdened by the daunting task of precisely matching supply with demand, manufacturers are constantly improving processes to achieve the highest forecast accuracy, ensuring that when the consumer walks into the store, the product they are looking for is on the shelf. During times of economic uncertainty, the need for more accurate forecasting becomes increasingly critical as companies work to trim costs in the supply chain to ensure stability and profitable growth without sacrificing customer service.

With this in mind, manufacturers are actively looking for the best methods to gain visibility or "sense" consumer demand. These efforts have included programs such as Vendor Managed Inventory (VMI), Efficient Consumer Response (ECR), and Collaborative Planning, Forecasting and Replenishment (CPFR). But each of these initiatives has challenges and does not necessarily capture true consumer demand. With VMI and ECR, inventory policy management drives the replenishment process instead of consumer demand. With CPFR the driver is demand planning. CPFR is focused on a broader buyer-seller relationship, which gives the manufacturer greater visibility of demand along with more time to respond to specific changes including planned promotions or special events.

With demand planning as the key driver in the CPFR process, the challenge is deciding which demand signals will drive the collaboration process. Many companies prefer to leverage multiple demand signals (order history, shipments, Point of Sales (POS), new product plans, promotions, syndicated data, etc.) to calculate their forecasts.
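As an illustration of leveraging several demand signals in one forecast, here is a minimal Python sketch of a weighted blend; the signal names, weights, and quantities are made-up assumptions, not figures from the article:

```python
# Hypothetical sketch: blend several per-period demand signals into one
# forecast as a weighted average. Signal names and weights are assumed.

def blend_forecast(signals, weights):
    """signals: dict of signal name -> list of per-period quantities
    weights: dict of signal name -> relative weight
    Returns the weighted-average forecast per period."""
    periods = len(next(iter(signals.values())))
    total_weight = sum(weights[name] for name in signals)
    blended = []
    for t in range(periods):
        value = sum(weights[name] * signals[name][t] for name in signals)
        blended.append(value / total_weight)
    return blended

forecast = blend_forecast(
    {"order_history": [100, 110, 95],
     "shipments": [98, 105, 90],
     "pos": [104, 120, 99]},
    {"order_history": 0.3, "shipments": 0.2, "pos": 0.5},
)
```

In practice the weights would be tuned per product and channel, typically giving POS a heavier weight for fast-moving items where it is timely and reliable.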

Even though CPFR programs have been successful in getting consumer-centric supply chains on the same page, some initiatives have not achieved the anticipated mass adoption, for several reasons. Most of the companies still relying on product-centric supply chains, in which manufacturers push their goods out into retail and wholesale channels, can no longer afford to build inventory and wait for customers to buy. They need to get as close to consumer demand as possible, and leveraging POS data is an excellent way to do so.

The growing use of POS data helps manufacturers take a giant step toward using consumer-driven demand. Over the past several years, the availability and accuracy of POS data have increased dramatically. POS data is often viewed as giving a truer picture of consumer demand because it is unencumbered by elements such as batch sizing, shipping quantities, and lead times. It can provide an early indicator of what is selling and what is not, critical information for new product introductions (NPIs) or short shelf-life products. This gives manufacturers the lead time necessary to adjust their manufacturing and distribution plans, ensuring they are appropriately positioned to meet changing demand patterns.

The adoption and availability of technology that captures and analyzes POS data has led to its increasing integration into the demand management process. Manufacturers that have successfully incorporated POS data into the demand management process are experiencing increased forecast accuracy, improved NPIs, lower out-of-stocks, and decreased total costs. According to AMR Research, "Reducing out-of-stocks can contribute as much as 4% to the bottom line."

For example, when a consumer goes into a mass-merchant retailer and buys a popular mascara, the product is scanned at checkout, creating POS data. The mascara manufacturer receives a demand signal indicating that the specific product has been purchased at a specific store at a specific date and time. The manufacturer gains visibility and can start refining its replenishment plans within hours, and this granularity gives a very clear picture. Traditional forecasting techniques, by contrast, would base the plan on order or shipping information: the forecaster would assume that because 10 cartons of mascara were shipped from the distribution center in September, market demand equals 10 cartons. Better visibility requires leveraging POS data to refine the demand plan and truly understand time-phased replenishment needs.
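The mascara example can be sketched as a simple store-level replenishment calculation driven by POS scans rather than shipment assumptions; the function and quantities below are hypothetical:

```python
# Hypothetical sketch: compute a store replenishment quantity from POS
# scans, instead of equating shipped cartons with market demand.

def replenishment_need(pos_units_sold, on_hand, target_stock):
    """Units to ship to the store to restore on-hand to target
    after subtracting what POS says was actually sold."""
    projected_on_hand = on_hand - pos_units_sold
    return max(0, target_stock - projected_on_hand)

# 37 units scanned at checkout this week, 50 on hand, target shelf stock 60
need = replenishment_need(pos_units_sold=37, on_hand=50, target_stock=60)
# need == 47
```

The same calculation run at SKU/store level each day is what lets a manufacturer refine its plans within hours of the scan, as described above.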

BENEFITS AND CHALLENGES OF POS DATA

The benefits of leveraging POS data as a primary or secondary demand signal are significant. It improves accuracy because scanning is far more precise than keying in numbers from a pricing label. It gives a quick, near real-time look at the products moving through a specific retail channel. Additionally, POS data provides a granular SKU/store-level insight along with aggregate information to better manage inventory, trigger replenishment, and analyze sales patterns including the success of promotion plans.

There are also challenges that come with POS-based demand signals. First, syndicated data from IRI, Nielsen, and others, the primary source for corporate marketing analysis because of its broad sample, is available only at the category level. Furthermore, syndicated data does not include POS data from major retailers such as Wal-Mart and thus may not cover global markets. The format in which the data is available also creates problems: there is no single standard for formats, EDI (Electronic Data Interchange), calendars, or selling horizons across the major retail chains. Also, since POS data arrives in huge volumes, it can be challenging to manage in a timely fashion. However, over the past five years or so, POS data accuracy has improved significantly as a result of technology upgrades and consistent cashier training. Additionally, retailers and manufacturers have a better understanding of the rich data and timely insight that can be harvested through POS.

Manufacturers must be driven by consumer demand and thus use POS data to more accurately predict, sense, plan, and respond to real-time demand signals across a global network of suppliers, retailers, and consumers. When executed effectively, a supply chain driven by consumer demand will positively impact profitability, inventory investments, customer satisfaction, and asset utilization. In The Handbook for Becoming Demand-Driven, AMR Research published a compelling assessment of the benefits of a consumer-driven demand business model, which found that the most advanced demand-sensing companies have achieved a distinct competitive advantage across the business, including 15% less inventory, 17% higher perfect-order performance, and 35% shorter cash-to-cash cycle time.

BECOMING MORE DEMAND-DRIVEN

So how do manufacturers become demand-driven? You must first realize that technology alone will not transform you from a product-centric into a consumer-centric manufacturer. It requires a combination of people, process, and technology. With a common focus on the consumer, you can begin breaking down internal corporate barriers between departments. This will ensure the commitments you make to retail customers are delivered by your entire organization. Don't let organizational silos limit your success.

Because manufacturers struggle to respond to sudden spikes in demand or unexpected surges in inventory, profitable growth continues to be an elusive goal. In fact, the majority of manufacturers are driven by forecasts modeled on historical shipments or orders to retailers rather than on sales to end consumers. This traditional approach is geared to keeping production plants efficient but often leads manufacturers to "stuff" their channels with costly excess inventory.

With shorter product life cycles, increasing customer expectations, and the need to support a portfolio of NPIs, supply chain leaders must supplement the traditional demand planning process with POS demand signals. With more timely access to POS data, manufacturers can sense changes in demand more quickly. As Figure 1 shows, the research firm Aberdeen Group finds that 50% of companies report that it takes more than one month to sense changes in demand, which is unacceptable in today's marketplace.

Aberdeen recommends that companies with any of the following attributes should focus on establishing more rapid demand sensing and response capabilities:

* Maintain safety stock levels to account for poor short-term forecast accuracy.

* Have short-to-medium lead times for products (one to six weeks).

* Use promotion-intensive marketing strategies that require strong SKU-level forecast accuracy.

* Fail to have the right SKU mix on retail shelves, creating measurable losses in sales and gross margins.

* Want to improve customer-service levels, as well as to achieve smoother upstream manufacturing processes.
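The first attribute, holding safety stock against poor short-term forecast accuracy, is commonly sized with the standard service-level formula. The sketch below uses that textbook formula, not anything prescribed by Aberdeen, and the numbers are invented:

```python
import math

# Standard safety-stock sketch: stock held against forecast error
# over the replenishment lead time. All parameter values are assumed.

def safety_stock(z, demand_std_per_week, lead_time_weeks):
    """z: service-level factor (e.g. ~1.65 for roughly 95% service),
    demand_std_per_week: std deviation of weekly forecast error,
    lead_time_weeks: replenishment lead time in weeks."""
    return z * demand_std_per_week * math.sqrt(lead_time_weeks)

ss = safety_stock(z=1.65, demand_std_per_week=40, lead_time_weeks=4)
# ss == 1.65 * 40 * 2 = 132.0 units
```

Better demand sensing reduces the forecast-error term directly, which is why rapid sensing lets these companies carry less safety stock for the same service level.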

Supply chain technology has evolved quickly over the past decade, often outpacing the capabilities that manufacturers have been willing or able to implement. The current dynamic market may be the catalyst needed to make a consumer-driven business strategy a top priority. Even with the increase in the use and availability of POS data for demand sensing, it is just one component of a comprehensive demand management process that also includes demand planning, demand shaping, and demand collaboration to profitably and efficiently match supply with demand. (Demand sensing means detecting a change in the demand pattern using downstream data such as POS and Radio Frequency Identification, RFID; demand shaping means influencing demand using marketing, price, promotion, trade, and sales incentives.) Together, these techniques establish a process for predicting, dynamically sensing, shaping, and reshaping demand to move your business from reacting to demand, to anticipating demand, to driving demand across strategic, operational, and tactical planning horizons.

KEYS TO ROBUST DEMAND MANAGEMENT

Although the concepts of demand planning, demand sensing, demand shaping, and demand collaboration seem simple, their implementation requires a holistic approach to demand management. With a consumer-driven demand business strategy, forecasting grows in importance as it evolves into a robust demand management process that becomes the foundation for upstream supply chain activities. To gain valuable insight into your business, consider the following changes in demand management.

* Plan for Key Customers and Channels: It is critical to model and forecast your top customers and/or channels to better understand key drivers, specific demand patterns, and customer service requirements. Although customer-level and channel-level forecasting is readily available in leading demand planning solutions, only a minority of manufacturers leverage it for their most important customer ship-to locations.

* Leverage Downstream Data: Complement traditional forecasting processes with downstream data including syndicated data, POS, and, in some cases, RFID. These more timely indicators coming from consumer demand will increase supply chain responsiveness to current market activity.

* Use Flexible Planning Periods: If appropriate for your business, the availability of POS data can help your team move from a monthly to a weekly forecasting process. At a minimum, it will allow you to fine-tune and synchronize replenishment activities. Most businesses need a comprehensive demand plan that supports strategic, operational, and tactical planning horizons for a variety of roles in the business. An accurate long-term forecast is critical to aligning production capacity and sourcing resources.

If adopted as a key demand signal in the demand planning process, POS data gives manufacturers a strong competitive advantage. Particularly in the CPG industry, POS data can significantly decrease shelf level out-of-stock rates and increase demand forecast accuracy. More and more manufacturers have turned to demand planning solutions to increase visibility and capture and use POS data to create the demand forecast.

FUTURE OF POS DATA

In today's economy, the demand-driven supply chain is a powerful weapon for businesses of all sizes. With dynamic market swings, volatile fuel prices, less predictable consumers, and intense global competition, corporate profit margins are experiencing a convergence of pressures. While researchers have talked about the importance of using POS data in the demand management process for almost a decade, the adoption rate by manufacturers was initially slow. Today, many fast-moving consumer goods manufacturers are increasing forecast accuracy by leveraging POS data in addition to data from orders and shipments. The insight a planner gains from access to multiple demand signals has been proven to improve the accuracy of the overall forecasting process and ensure improved customer service.

Consumers are fickle, and predicting what they will purchase is never an exact science, but the technology to aid this process has reached a level where clear and timely prediction is possible. The volatility of the economy is always at the forefront of a manufacturer's strategic direction. As a result, finding a better way to manage demand becomes ever more critical to building profitable growth. As the accessibility and reliability of POS data improve, along with the adoption of demand planning systems to leverage that data, more and more manufacturers will benefit from the POS demand signal to reduce inventory, improve customer service, and boost profitability.

Source: Karin Bursa

Tuesday, February 17, 2009

Approximate Analysis and Optimization of Batch Ordering Policies in Capacitated Supply Chains

Devising manufacturing/distribution strategies for supply chains and determining their parameter values have been challenging problems. Linking production management to stock keeping processes improves the planning of the supply chain activities, including material management, culminating in improved customer service levels. In this study, we investigate a multi-echelon supply chain consisting of a supplier, a plant, a distribution center and a retailer. Material flow between stages is driven by reorder point/order quantity inventory control policies. We develop a model to analyze supply chain behavior using some key performance metrics such as the time averages of inventory and backorder levels, as well as customer service levels at each echelon. The model is validated against simulation, yielding good agreement of robust performance metrics. The metrics are then used within an optimization framework to design the supply chain so as to minimize expected total system costs. The outcome of the optimization framework specifies how to move inventory throughout the supply chain and how to set inventory control parameters, reorder levels and replenishment batch sizes.
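The reorder point/order quantity control driving material flow between stages can be illustrated with a toy single-echelon simulation. This is not the authors' model, just the standard (s, Q) rule on invented demand data:

```python
# Illustrative sketch of a reorder point / order quantity (s, Q) policy
# at one echelon. All demand and parameter values below are made up.

def simulate_sQ(demands, s, Q, initial_inventory, lead_time):
    """Returns end-of-period on-hand inventory (backorders as negatives)."""
    on_hand = initial_inventory
    pipeline = []  # (arrival_period, quantity) of outstanding orders
    history = []
    for t, d in enumerate(demands):
        # receive any orders arriving this period
        arrived = sum(q for (arr, q) in pipeline if arr == t)
        pipeline = [(arr, q) for (arr, q) in pipeline if arr != t]
        on_hand += arrived
        on_hand -= d  # negative on_hand represents backorders
        # inventory position = on hand + everything still in the pipeline
        position = on_hand + sum(q for (_, q) in pipeline)
        if position <= s:  # at or below the reorder point: order Q
            pipeline.append((t + lead_time, Q))
        history.append(on_hand)
    return history

inv = simulate_sQ(demands=[20, 30, 25, 40, 35], s=40, Q=100,
                  initial_inventory=80, lead_time=2)
# inv == [60, 30, 5, 65, 30]: the order placed in period 1 arrives in period 3
```

Time averages of these inventory and backorder trajectories are exactly the kind of performance metrics the model above estimates analytically and then feeds into the cost-minimizing optimization of s and Q at each echelon.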

Source : Abdullah Karaman, Tayfur Altiok

Monday, February 16, 2009

Organizational Viewpoint of the Relationships in Supply Chains

A supply chain is a strand, or chain, of operations that passes through an organization's supply network. Many different terms (and the concepts they describe, e.g., purchasing and supply management, physical distribution management, logistics, merchandising, materials management, and SCM), some of which overlap, are used to describe various parts of the SC. They represent an increasing degree of integration among the SC linkages.

SCM is a broader and strategically more significant concept that includes the entire SC, from the supply of raw materials through manufacture, assembly, and distribution, to the end customer. It includes the strategic and long-term consideration of SCM issues as well as the shorter term control of flow throughout the SC.

The exact nature of the relationship among the different linkages within the SC can be viewed on a continuum that goes from highly integrated at one extreme to temporary and short-term trading commitments at the other. The organization tries to define the totality of working tasks, their mutual relationships, connections and synergies, as well as mechanisms for the suitable connection and coordination of organizational factors.

For the formation of the working structure, the organization is able to use different starting points. Modern conditions of work demand that organizations include into the structure plans the characteristics of the informational process needed for the realization of organizational aims. The organization can use the traditional or modern plan for the formation of the structure. The traditional access to the formation of the structure is based on considering vertical informational connections. But for most organizations, the vertical connections are not enough, so they fulfill them with horizontal connections.

For the formation of the organizational structure, the organization can use functional grouping, divisional grouping, grouping with more viewpoints, and horizontal grouping. Different structures in the frame of functional, divisional, and horizontal structures define different levels of coordination and integration of an organization.

In the frame of functional and divisional structures, the management is able to create total business support by forming whole vertical and horizontal connections. In such a way, the efficiency of present vertical connections will increase and the integration of the organizational work will improve. In horizontal structures, the activities are organized horizontally around the basic (or key) working process. Most organizations form specific hybrid structures, which include characteristics of two or more structure types, on these starting points.

Source: Dr. Vojko Potocan, University of Maribor, Slovenia

Monday, January 26, 2009

What are ERP and CRM?

ERP is a process that helps you put any and all resources involved with an organization to the best possible use. ERP has had other names in past iterations: Material Requirements Planning and Manufacturing Resource Planning. The name Manufacturing Resource Planning shows that, at its roots, it was most often used as a tool in a manufacturing environment. Typically, it was used in reference to a process with several discrete operations or discrete objects, many of which can be broken down further into atomic-level objects or processes. An example would be a simple wooden bar stool: three legs, three dowels connecting those legs at a predefined spacing, and a round wooden seat. One process might be drilling the hole for leg one into the bottom of the seat piece; there would be three similar processes, one for each leg. Each leg might have a process of drilling two holes, where each hole has a depth, a diameter, an angle relative to the leg, and an angle relative to the other legs. The finished product (bar stool) as a whole has a demand for each component (e.g., legs, dowels, seat), and a predefined amount is allocated to waste. Tracking all of this information, as well as the times when projected numbers fall outside the expected ranges, is what historically was done by an MRP system, whether in a spreadsheet, in a notebook, or in early databases (usually with homegrown applications built as a front end).
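The bar-stool example amounts to a small bill of materials with a waste allowance. The sketch below is a hypothetical illustration of how an MRP system might explode a finished-goods demand into component requirements; the scrap rates are assumed, not from the text:

```python
# Hypothetical bill of materials for the bar stool described above,
# with a simple requirements explosion including an assumed waste allowance.

BOM = {
    "bar_stool": {"leg": 3, "dowel": 3, "seat": 1},
}
WASTE_RATE = {"leg": 0.05, "dowel": 0.10, "seat": 0.02}  # assumed scrap rates

def gross_requirements(product, quantity):
    """Component quantities needed for `quantity` finished units,
    inflated by each component's waste allowance, rounded to whole units."""
    needs = {}
    for component, per_unit in BOM[product].items():
        base = per_unit * quantity
        needs[component] = round(base * (1 + WASTE_RATE[component]))
    return needs

reqs = gross_requirements("bar_stool", 100)
# legs: 300 * 1.05 = 315, dowels: 300 * 1.10 = 330, seats: 100 * 1.02 = 102
```

A real MRP system would then net these gross requirements against on-hand and on-order stock, and offset each component by its lead time.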

ERP methodology has grown significantly from its manufacturing roots, although MRP is often still the basis from which an ERP implementation grows. Today the concept of ERP refers to a broad set of activities that a company or enterprise performs, both internally and externally. The computerized system referred to when discussing the planning and management of an enterprise's resources (all resources, including money, physical assets, and people) is an integrated solution. Such a software system is typically made up of multiple modules that interact, share information with one another, and provide management with a broad, all-encompassing picture of the entire enterprise. These systems can now be used to meet needs in any industry.

Within the software is stored the information that management needs to operate its business day to day. ERP software systems break down the departmental barriers that sometimes still exist in organizations and allow the information that may have been in silos before to be shared across the enterprise. Further, it takes a process-oriented view of the organization and uses that view to allow the organization to meet its goals by tightly integrating all aspects of the organization. With ERP software, a company can better integrate its entire supply chain, automate many of its processes, and reduce its lead times and exceptions to the process along the way.

CRM is the process of finding, getting, and retaining customers. It encompasses the methodologies, strategies, and other capabilities that help a company or enterprise organize and manage its customer relationships, as well as the software tools that help achieve those ends. Today, many companies focus on the wants and needs of the customer, so the ability to track information about the customer, learn from that information, and use it to better serve the customer is crucial. CRM helps a company learn what works and what does not. It helps the company identify the profile of its most profitable customers, gain a deeper understanding of its most and least profitable customers, and target the most profitable customer profile when searching for new business. For companies forming alliances with business partners, CRM centralizes information on the customer base in a way that can be shared between partners to help create products that better serve the end user. Previously, customer-centric information was likely already stored within the company. It was unlikely, however, that this information was stored in a central location or was easily accessible by multiple departments; reporting on customer information in an enterprise-wide manner was therefore nearly impossible. And if information is difficult to report on, it is nearly impossible to analyze.

CRM will help your customer base, and your reputation within that base, by allowing faster responses to customers' inquiries, because the information is centrally stored and accessible by the people who are interfacing with the customer.

source : Oracle E-Business Suite

Monday, January 19, 2009

Enterprise Resource Planning (ERP)

An ERP system is, by de facto definition, an application that supports the day-to-day transactions and operations involved in managing a company's resources, such as funds, people, machines, spare parts, time, materials, and capacity.

An ERP system is divided into several subsystems: Finance, Distribution, Manufacturing, Maintenance, and Human Resources.

To see how an ERP system can support our business operations, consider the following small case:

Suppose we receive an order for 100 units of Product A. The ERP system helps us calculate how many can be produced given all of our current resource constraints. If those resources are insufficient, the ERP system can calculate how much additional resource is needed and help us procure it. When the finished goods are to be distributed, the ERP system can also determine the optimal loading and transportation to the destination the customer specifies. Throughout this process, every financial aspect is of course recorded in the ERP system, including the production cost of those 100 units.
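The order-for-100-units case above can be sketched as a simple availability check: how many units can we build with current resources, and what is short? The resource names and quantities below are invented for illustration:

```python
# Hypothetical sketch of the ERP resource check for an order of 100 units
# of Product A. Per-unit requirements and availabilities are assumed.

REQUIRED_PER_UNIT = {"material_kg": 2, "machine_hours": 0.5, "labor_hours": 1}
AVAILABLE = {"material_kg": 150, "machine_hours": 60, "labor_hours": 120}

def producible_and_shortfall(order_qty):
    """How many units current resources allow, and what additional
    resources are needed to fulfill the full order quantity."""
    buildable = int(min(AVAILABLE[r] / REQUIRED_PER_UNIT[r]
                        for r in REQUIRED_PER_UNIT))
    shortfall = {}
    for r, per_unit in REQUIRED_PER_UNIT.items():
        needed = per_unit * order_qty
        if needed > AVAILABLE[r]:
            shortfall[r] = needed - AVAILABLE[r]
    return buildable, shortfall

buildable, short = producible_and_shortfall(100)
# buildable == 75 (material is the bottleneck); short == {"material_kg": 50}
```

The shortfall output is exactly what the ERP system would hand to the purchasing module to trigger procurement.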

We can see that data or transactions recorded by one function are often used by other functions. For example, the product master can be used by purchasing, supply, production, warehousing, transportation, finance, and so on. The element of 'integration' is therefore very important, and it is a major challenge for ERP system vendors.

In principle, an ERP system lets an industrial operation run optimally and reduces inefficient operating costs, such as inventory costs (slow-moving parts, etc.) and losses due to machine faults. In developed countries, already supported by adequate infrastructure, companies have been able to apply the Just-In-Time (JIT) concept, in which all production resources are provided only when they are needed (fast moving). This also covers the provision of spare parts for maintenance, preventive service schedules to avoid machine faults, inventory, and so on.

For industries that require efficiency and computerization on the sales side, there is an addition to the ERP concept called Sales Force Automation (SFA). This system is an important part of an ERP supply chain. Essentially, a sales force equipped with SFA can work more efficiently because all information about a customer or a marketed product is in its database.

For assemble-to-order or make-to-order industries in particular, such as aircraft, shipbuilding, automobile, truck, and other heavy industries, the ERP system can also be complemented with a Sales Configuration System (SCS). With an SCS, a salesperson can provide quotations and proposals complete with drawings, specifications, and prices based on the customer's wishes. Say a prospective customer calls for a quote on a car with a particular combination of options: blue paint, racing wheels, a V6 engine, a sport spoiler, and so on. With an SCS, the salesperson can quote a price for that combination on the spot.

ERP systems are designed around business processes considered 'best practice', the common processes most worth emulating: for example, the standard processes for purchasing, stocking the warehouse, and so on.

To obtain the greatest benefit from an ERP system, our industry must also follow the prevailing 'best practice processes'. This is where many problems and challenges arise for industry in Indonesia: for example, whether to change our working processes to match those required by the ERP system, or to change the ERP system to fit our working processes.

This adjustment process is often called implementation. If implementation requires fairly fundamental changes to working processes, the company must carry out Business Process Reengineering (BPR), which can take months.

In conclusion, an ERP system is a software package that is essential for managing an industrial operation efficiently and productively. By its de facto definition, an ERP system must touch every aspect of a company's resources: funds, people, time, materials, and capacity. To further extend the capabilities of an ERP system, CRM, SRM, PLM, and Project Management modules can be added. And because an ERP system is designed around 'best practice' processes, implementing it in a given company is the ERP implementor's main challenge.

Modules of Enterprise Resource Planning (ERP) systems:

1. Item Master Management (IMM)
2. Bill Of Material (BOM)
3. Demand Management (DM)
4. Sales and Order Management (SOM)
5. Master Production Scheduling (MPS)
6. Material Requirements Planning (MRP)
7. Capacity Requirement Planning
8. Inventory Management (INV)
9. Shop Floor Control (SFC)
10. Purchasing Management (PUR)
11. General Ledger (GL)
12. Account Payable (AP)
13. Account Receivable (AR)
14. Cost Control (CO)
15. Financial Reporting (FIR)

source: http://www.erpweaver.com/

Developing a Knowledge Management Platform for Automotive Build-To-Order Production Network

Modern global companies have to build a supply chain network strategy that provides maximum flexibility and can optimally respond to changes in their environment. The emergence of automotive build-to-order production networks is one consequence of these changes in the automotive industry. Production networks can be seen as a step beyond the linear supply chain topography. However, when dealing with multiple organizations and multiple processes within a complicated production network, identifying and locating a member that has responsibility and/or competence in a particular part of the network can be a laborious, time-consuming process. Developing and maintaining a competence directory of all the relevant parties associated with troubleshooting and potential problem solving can significantly reduce production lead time, and linking this directory to key decision points and frequent problems can further enhance its effectiveness. Consequently, the problem of semantic interoperability between members of such organizations is of major importance. This work proposes an approach to developing a knowledge management platform for a build-to-order production network to solve this problem, based on the application of technologies such as ontology management, context management, and profiling.

Source

Knowledge creation in a supply chain

Knowledge creation in a supply chain aims to analyze how organizational conditions, technology adoption, supplier relationship management, and customer relationship management affect knowledge creation in a supply chain through the socialization, externalization, combination, and internalization (SECI) modes and various ba, as proposed by Nonaka and Konno. A qualitative inquiry with thematic analysis, focusing on a thin-film-transistor liquid-crystal display (TFT-LCD) panel manufacturer and an integrated circuit (IC) packaging and testing manufacturer, is presented in order to identify how these key factors affect knowledge creation in a supply chain environment through the SECI modes and ba. The results show that these critical factors facilitate different types of knowledge conversion processes in order to achieve successful knowledge creation in a supply chain. Knowledge of the significant factors found in this study may be applicable to countries or areas such as Hong Kong, Korea, and Singapore, or to other developing economies whose dominant businesses are similar to the original equipment manufacturers (OEMs) and original design manufacturers (ODMs) in Taiwan. This paper considers the case study only as one empirical illustration of many possible implementation processes. The study does not assume that these companies are a paradigm or that the specific situation is applicable to all other business enterprises. Future researchers interested in this field are therefore encouraged to triangulate its findings by examining variables generated from this study.

Source : Chuni Wu

How do Suppliers Benefit From Information Technology Use in Supply Chain Relationships?

Supply chain management systems (SCMS) championed by network leaders in their supplier networks are now ubiquitous. While prior studies have examined the benefits these systems bring to network leaders, little attention has been paid to the benefits to supplier firms. This study proposes that two patterns of SCMS use by suppliers, exploitation and exploration, create contexts for suppliers to make relationship-specific investments in business processes and domain knowledge. These investments, in turn, enable suppliers both to create value and to retain a portion of the value created by the use of these systems in interfirm relationships. Data from 131 suppliers using an SCMS implemented by one large retailer support the hypotheses that relationship-specific intangible investments play a mediating role linking SCMS use to benefits. Evidence that patterns of information technology use are significant determinants of relationship-specific investments in business processes and domain expertise provides a finer-grained explanation of the logic of IT-enabled electronic integration. The results support the vendors-to-partners thesis that IT deployments in supply chains lead to closer buyer-supplier relationships (Bakos and Brynjolfsson 1993). The results also suggest the complementarity of the transaction-cost and resource-based views, elaborating the logic by which specialized assets can also be strategic assets.

Source : Mani Subramani

Sharing Global Supply Chain Knowledge

There are two categories of supply chain partners: those that buy and those that sell. Depending on which group they identify with, managers have different perspectives on the value of sharing critical knowledge resources with their supply chain partners. Both groups agree that sharing knowledge makes for more efficient supply chains (with lower costs and quicker speeds) and more effective organizations (with higher-quality outputs and enhanced customer service). But the benefits of knowledge sharing don't always accrue equally or simultaneously to all participants. What type of information or knowledge should suppliers and buyers share with each other? How does knowledge sharing provide value to buyers and suppliers, and under what circumstances can it help both? How do cross-cultural differences between global buyers and suppliers influence the value of sharing information? To answer these questions, we studied more than 100 cross-national supply chain partnerships in the industrial chemicals, consumer durables, industrial packaging, toy, and apparel industries in 19 country locations. We examined how different types of knowledge sharing can benefit buyers or sellers individually. But more importantly, we studied how knowledge sharing can enhance the performance of partnerships and build stronger supply chains in the global marketplace.

Source : Matthew B Myers, Mee-Shew Cheung

Support Leaders in a Modern Workplace

Strengthening the internal communication function is a timely subject when increasingly, in my view at least, it faces some serious challenges. The two most obvious ones are the current workplace revolution we're all experiencing as we move from an industrial society to an information society, and the impact of ever-changing technology.

These two mega trends are affecting our work and the role of both senior business leaders and communication leaders in unprecedented ways.

These trends mean that senior leaders can no longer presume that "communication merely happens" by virtue of people working together to achieve agreed-on results. They also mean that senior leaders must take a much more active communication role than ever before to help explain the change and to put communication technology in its proper place, as a tool rather than a panacea.

I've been so struck by these trends that I've written a new book called The Credible Company: Communicating with Today's Skeptical Workforce.

How to break the barrier of skepticism

My primary thesis is twofold. First, that senior leaders must acknowledge that today's turbulent change has turned their relationship with their employees upside down. Those employees, who are the means of doing business in the information age, are increasingly skeptical, if not cynical about the communication they receive at work.

That means that for company leaders not only to be credible but also to focus on improved company performance in today's global marketplace, they need to understand the changed relationship and design communication strategies that break through the skepticism that gets in the way of performance.

What they, and we as communicators, need is a robust strategic prescription that's addressed to these new issues.

Technology won't solve all problems

Second, I'm increasingly persuaded that our current love affair with technology as a seductive end rather than as a remarkable means has placed our profession at a significant crossroads.

Will we be mere purveyors of information without regard to outcomes? Will we focus on craft at the expense of balanced communication strategy? Craft is a tempting and easier alternative to the more difficult role of advocacy, education and the greater humanizing of organizational practices and values.

If they truly want to strengthen the communication process, senior leaders need to insist on a robust strategy that sees information as the raw material for today's intellectual assembly line and that recognizes the needs of the audience as the starting point and the first cause in creating that strategy. They need to recognize that face-to-face communication is the antidote to impersonal digital delivery of information and that openness is the antidote to skepticism.

They also need to insist on careful research and data gathering to identify the specific, changing needs of their own workforces and to use the marketplace as the means of rationalizing company strategy in response to the marketplace.

Finally, they must combine all of this into a communication strategy that drives the goals and objectives of the organization.

Help leaders change their view

This may be a tall order. But if we continue on our current path of using technology primarily to deliver mountains of raw information, we run the risk of "dumbing down" the workforce further and increasing the skepticism that already exists.

We need institutional leaders with the courage to tell the truth as it happens, to admit their own confusion in a complex world, and to listen intently to the needs and anxieties of their followers. We also need local interpretation by people on the ground-managers, supervisors and team leaders who understand local needs and who can translate how larger forces and events affect their teams and what they must do to adjust.

What leaders must change is their view that communication just happens in a well-run organization. Instead, they must recognize the need to make it a deliberate and accountable system, like all of the other systems and processes in the organization. The credible company and the credible leader alike will understand that if they wish to strengthen the communication process, they will demand communication strategies that efficiently move human energy in pursuit of worthy goals.

Source : Roger D'Aprix

Saturday, January 10, 2009

Productivity

Industrial engineering has traditionally concentrated on reducing direct labor costs. When industrial engineering came into existence, direct labor was typically a major cost of doing business in manufacturing, and indirect and overhead costs were often minimal. Today it is just the opposite: it is not uncommon for direct labor costs to represent only 5 percent of manufactured cost and for indirect and overhead costs to represent 50 to 80 percent of manufactured cost. Industrial engineering emphasis and approaches have been slow in adjusting to this change in cost ratios.
Lehrer, ahead of his time, foresaw the need for a concentrated effort in productivity improvement, a concept called "continuous improvement" today. Much of the attention given today to productivity and quality improvement is consistent with his message of a quarter-century ago.
One of the few recent industrial engineering texts addresses the important issue of successful implementation. The client change process it proposes identifies an important requirement of successful practice in industrial engineering: a practitioner must approach a client in a way that will successfully change the client's behavior (industrial engineering is in the behavior modification business). Industrial engineering to date has been remiss in generally ignoring this requirement of successful practice. More attention needs to be given to such aspects if our goal is to enable future industrial engineers to maximize their accomplishments.
In the future, industrial engineering will increasingly be practiced in work team and management consensus environments. Industrial engineers will need to be trained both to lead and to function within the team efforts of the future.
Block makes an excellent case for the need for all practicing industrial engineers to acquire three sets of skills: technical, interpersonal, and consulting. All practicing industrial engineers are consultants. They are almost always trying to convince someone to do something; they basically make recommendations for a living. They need to learn how to do so effectively.

Friday, January 9, 2009

Transportation Scheduling of Palm Fresh Fruit Bunches (FFB) Using Linear Programming

The bin system is a method of transporting fresh fruit bunches (FFB, referred to as TBS in the original study). During low-crop periods, assigning one Prime Mover to each division leaves the Prime Movers idle much of the time. Reducing the fleet to three Prime Movers should lower that idle time, but it also increases the delay before bins are transported; a schedule is therefore required that minimizes both Prime Mover idle time and bin transport delay.

The schedule is constructed with a Linear Programming assignment model, chosen because it can produce a schedule in which both Prime Mover idle time and bin transport delay are minimized. Before scheduling, the optimal number of bins must be determined, and a queuing analysis is needed to establish whether queues at the weighbridge affect the flow of the transportation process.
The resulting schedule gives a mean total idle time of 111 minutes per Prime Mover per day, a mean FFB transport delay of 41.8 minutes, and a total transport completion time per division of between 327.5 and 566.3 minutes.
Under the system used by Libo Transport, the number of trips per Prime Mover follows the trip requirement of the selected division. Under the proposed schedule, the number of trips per Prime Mover depends not on any single division but on the total trips required across all divisions, so the trips per Prime Mover tend to be equal (8 trips per day).
Comparing the Libo Transport system with the proposed schedule shows that, during low-crop periods, the number of bins and Prime Movers can be reduced. Removing one Prime Mover does increase the delay before bins are transported. If the FFB delay is counted from the collection point (TPH) through unloading at the loading ramp, the largest delay caused by waiting for bin transport is 148 minutes; together with trip time and the 26 minutes of queuing and unloading at the loading ramp, the total comes to 228 minutes. This delay is tolerable because it is less than 8 hours.
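The assignment model behind this schedule can be illustrated with a small, self-contained sketch. The cost matrix, fleet size, and pairing below are invented for illustration (the study's actual trip data is not reproduced here), and a brute-force search stands in for the Linear Programming solver, which is workable only because the toy instance is tiny.

```python
from itertools import permutations

# Hypothetical cost matrix: cost[i][j] = combined idle + delay time (minutes)
# if Prime Mover i serves the trip slot of division j. All numbers are
# illustrative, not taken from the study.
cost = [
    [40, 55, 70],
    [65, 35, 50],
    [60, 45, 30],
]

def best_assignment(cost):
    """Brute-force the assignment model: try every one-to-one pairing of
    Prime Movers to divisions and keep the cheapest total."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

total, assignment = best_assignment(cost)
print(total, assignment)  # minimal total time and the PM -> division pairing
```

At realistic problem sizes the same model would be handed to an LP or assignment-problem solver (e.g., the Hungarian method) rather than enumerated.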

Thursday, January 8, 2009

Total Quality Management

Total Quality Management is a management philosophy concerned with broad-based, continuous quality management throughout an organization. TQM is also difficult to define, which is typical of an emerging subdiscipline. Quality has been an increasingly popular stated objective of organizations during the past half-century.
Inspection came first and then quality control (QC), quality assurance (QA), product assurance (PA), total quality control (TQC), and finally total quality management (TQM).

Statistical Process Control (SPC)-the application of statistics to quality control.

Quality Assurance-quality control, extended with sufficient process analysis and administration to ensure that quality requirements are met.

Product Assurance-quality assurance that recognizes the need for, and extends the search for, quality solutions into the product design, which is the primary source of most quality problems.

Total Quality Control-product assurance, including a commitment to continuous improvement, employing participative management, and making the line organization and suppliers primarily responsible for quality performance.

Total Quality Management-total quality control, including continuous review and assessment of all operational and management policies, procedures, and practices, to perfect quality performance in meeting customer needs.

In each progression, the role of quality control was broadened to take on greater responsibility. TQM includes any aspect of the product, process, product design, and the management and control systems that has any potential effect on product quality. TQM has in fact only modestly extended the technology of quality control. What it has done is stress the need to examine how we do business and how we manage our operations. TQM is more a revolution in management than a revolution in quality control.
Most of the original QC statistical techniques still apply, but they are employed in a more rational management context. Quality control is now a management practice of inclusion, whereas in years past it was a plant activity of restricted inclusion-only quality control personnel were permitted to do quality control. A few of the TQM concepts are summarized below:

1. Adoption of kaizen (improvement) within our industrial culture.
2. Do it right the first time.
3. Recognition that management is the problem and needs to be addressed.
4. Quality control is a production responsibility.
5. Discipline.
6. Recognition that for the last decade in particular, while American management has been chasing "quality" as an end in itself, the Japanese industrial culture has been addressing quality and productivity improvement, and more recently, responsiveness.

There is reason to believe that TQM is not just a passing "silver bullet" and will, in fact, become a continuous improvement element of our future industrial and government culture.

Wednesday, January 7, 2009

Markov decision models for the optimal maintenance of a production unit with an upstream buffer

We consider a manufacturing system in which a buffer has been placed between the input generator and the production unit. The input generator supplies at a constant rate the buffer with the raw material, which is pulled by the production unit. The pull-rate is greater than the input rate when the buffer is not empty. The two rates become equal as soon as the buffer is evacuated. The production unit deteriorates stochastically over time and the problem of its optimal preventive maintenance is considered. Under a suitable cost structure it is proved that the optimal average-cost policy for fixed buffer size is of control-limit type, if the repair times are geometrically distributed. Efficient Markov decision process solution algorithms that operate on the class of control-limit policies are developed, when the repair times are geometrical or follow a continuous distribution. The optimality of a control-limit policy is also proved when the production unit after the end of a maintenance remains idle until the buffer is filled up. Furthermore, numerical results are given for the optimal policy if it is permissible to leave the production unit idle whenever it is in operative condition.
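The control-limit structure the abstract refers to can be sketched with a tiny numeric example. The deterioration states, costs, and transition probability below are all invented, and discounted-cost value iteration is used instead of the paper's average-cost analysis because it is simpler to code; the resulting policy still exhibits the control-limit shape (continue in good states, repair once deterioration passes a threshold).

```python
# Illustrative sketch only: states 0..4 are deterioration levels, and every
# number below is made up. The paper analyses average cost; the discounted
# version here is a simplification.
P_DETERIORATE = 0.3                  # chance of moving one state worse per period
OPERATING_COST = [0, 1, 3, 6, 10]    # cost per period by deterioration state
REPAIR_COST = 8                      # cost of a preventive maintenance action
GAMMA = 0.95                         # discount factor

def value_iteration(tol=1e-8):
    """Compute the optimal discounted-cost policy by value iteration."""
    n = len(OPERATING_COST)
    v = [0.0] * n
    while True:
        new_v = []
        for s in range(n):
            nxt = min(s + 1, n - 1)  # worst state is absorbing
            cont = OPERATING_COST[s] + GAMMA * (
                P_DETERIORATE * v[nxt] + (1 - P_DETERIORATE) * v[s])
            repair = REPAIR_COST + GAMMA * v[0]   # repair resets to state 0
            new_v.append(min(cont, repair))
        if max(abs(a - b) for a, b in zip(v, new_v)) < tol:
            v = new_v
            break
        v = new_v
    # Extract the policy: repair wherever it is the cheaper action.
    policy = []
    for s in range(n):
        nxt = min(s + 1, n - 1)
        cont = OPERATING_COST[s] + GAMMA * (
            P_DETERIORATE * v[nxt] + (1 - P_DETERIORATE) * v[s])
        repair = REPAIR_COST + GAMMA * v[0]
        policy.append('repair' if repair < cont else 'continue')
    return policy

print(value_iteration())
```

Because the operating cost rises with deterioration while the repair cost is flat, the continuation value is monotone in the state, which is what makes a single threshold ("control limit") optimal.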

Source: A Pavitsos, EG Kyriakidis. Computers & Operations Research 2009

A Call for an "Asian Plaza"

From 1995 to early 2002, the dollar rose by a trade-weighted average of about 40 percent. Largely as a result, the U.S. current account deficit grew by an average of about $70 billion annually for ten years. It exceeded $800 billion and 6 percent of GDP in 2006. This posed, and continues to pose, two major consequences for the world economy.

The first was the risk of international financial instability and economic turndown. To finance both its current account deficit and its own large foreign investments, the United States had to attract about $7 billion of foreign capital every working day. Any significant shortfall from that level of foreign demand for dollars would drive the exchange rate down, and U.S. inflation and interest rates up. With the U.S. economy near full employment but already having slowed, the result would be stagflation at best and perhaps a nasty recession.

The current travails of the U.S. economy are clearly related to these imbalances. The huge inflow of foreign capital to fund the external deficits held interest rates down and contributed significantly to the housing bubble that triggered the financial crisis and economic turndown. The sizeable slide of the dollar has indeed added to price increases, notably of oil as the producing countries seek to counter the losses it causes for their purchasing power, and thus greatly complicates the management of monetary policy as it tries to prevent a recession. The world economy is also adversely affected through the impact on other countries, as their currencies rise and they experience significant reductions in the trade surpluses on which their growth had come to depend.

Second is the domestic political risk of trade restrictions in the United States and thus disruption of the global trading system. Dollar overvaluation and the resulting external deficits are historically the most accurate leading indicators of U.S. protectionism because they drastically alter the domestic politics of the issue, adding to the pressures to enact new distortions and weakening pro-trade forces. These traditional factors are particularly toxic in the current context of strong anti-globalization sentiments and economic weakness. The spate of administrative actions against China over the past several years, and the numerous anti-China bills now under active consideration by the Congress, demonstrate the point graphically since China is by far the largest surplus country and its currency is so dramatically undervalued.

The U.S. current account deficit does not have to be eliminated. It needed to be cut roughly in half, however, to stabilize the ratio of U.S. foreign debt to GDP. That ratio was on an explosive path, which would have exceeded 50 percent within the next few years and an unprecedented 80 percent or so in ten. Avoiding such outcomes required improvement of about $400 billion from the levels reached by 2006.

I and colleagues at our Peterson Institute for International Economics have been pointing to these dangers since the end of the 1990s, and calling for corrective action that would include a very large decline in the exchange rate of the dollar. We were confident that such a decline would, as in the past, produce a substantial turnaround in the U.S. external position and it is now doing so. The current account deficit has fallen by more than $100 billion and is likely to drop by another $100 billion or so over the next couple of years. The fall of the dollar by 25-35 percent over the past six years, depending on which index is used, has sharply increased the international competitiveness of the U.S. economy. Exports have been growing at more than 8 percent annually for the past four years and by about 12 percent for the last two. Especially with the recent slowdown in U.S. growth, they are now expanding four times as fast as imports.

The internal corollary is of course that U.S. domestic demand, initially residential investment but now also consumption, is rising more slowly than output. This inevitable reversal, after a decade in which internal demand climbed more sharply than production, means that the improving trade balance is cushioning the aggregate U.S. slowdown to an important extent. We are in fact experiencing the first episode of "reverse coupling," through which the rest of the world continues to expand and pulls up the United States rather than being devastated by its turndown. This is an early indication of the shift in global economic weights, with the rapidly growing emerging markets now accounting for almost half of world output, as well as a timely unwinding of the chief global imbalance of the early twenty-first century.

THE CURRENT AGENDA

Even on this modestly optimistic prognosis, however, the U.S. deficit will remain too large. The dollar needs to fall by another 5-10 percent to cut the imbalance to a sustainable 3 percent of GDP. This is one of the three key factors that underlie the current set of imbalances that should now be addressed by global economic policy.

The second factor is the continuing surge of China's global current account surplus. That imbalance reached about $400 billion in 2007 and, while growing more slowly in the future, is likely to reach $500 billion by next year. It will thus be almost as large as America's global current account deficit in absolute terms in an economy about one-third the size of the United States. The surplus exceeds 10 percent of China's GDP, an unprecedented level for the world's largest exporting nation. The Chinese authorities have let their currency rise more rapidly against the dollar over the past few months, and continued appreciation at that pace for another two or three years could cut their surplus to a manageable level, but the renminbi has still not climbed at all against a trade- weighted average of the currencies of its main trading partners since the dollar peaked in early 2002 and its own surpluses started to climb.

The third factor is the creation of the euro, which provides a real international monetary rival for the dollar for the first time in almost a century. The dollar has been the world's dominant currency since the abdication of sterling, around the time of the First World War, primarily because it had no competition. No other currency was based on an economy and financial system that even approached the size of the U.S. economy or its capital markets, and thus none could even begin to challenge the dollar in international finance. Former German Chancellor Helmut Schmidt used to assert correctly that the deutschemark, the world's second leading currency for most of the postwar period, could never rival the dollar because "West Germany was the size of Oregon."

The creation of the euro changes all that. The European Union as a whole, and even the slightly smaller Euroland, has an economy about as large as the United States and exceeds U.S. levels of both external trade and monetary reserves. The euro has already outstripped the dollar in terms of currency holdings around the world and denomination of private bond flotations.

The dollar will obviously remain a major international currency and it may be some time before the euro overtakes it, if it ever does so. But we should expect a steady and sizable portfolio diversification from dollars into euros as private investors, central banks, and sovereign wealth funds seek to align the currency composition of their assets with the new structure of the world economy and global finance. One result will be steady upward pressure on the euro, and downward pressure on the dollar, in the exchange markets over the longer run. A somewhat similar portfolio adjustment took place from yen into dollars during the early 1980s, after Japan finally lifted its controls on capital outflows, adding substantially to the upward pressure on the dollar during that period.

A PROPOSED RESPONSE

The result of these developments is a series of imbalances, some old and some potentially new, that create major risks for the world economy, international financial stability, and the trading system (due to the protectionist impact of large currency overvaluations). They call for urgent new policy initiatives by the G7, the International Monetary Fund, and probably new groupings of key countries that reflect the rapidly evolving power structure of the global economy.

First, there is now a substantial risk of a free fall of the dollar. Its sizable depreciation over the past six years has been gradual and orderly, and it is approaching an equilibrium level. As often happens in the last stages of a major currency swing, however, like the dollar's upward overshoot in 1984-85 and downward overshoot in 1995, that decline could now accelerate.

Both growth differentials and interest rate differentials have moved sharply against the dollar and are likely to continue doing so for a while. As noted, the current account imbalances remain too large and the maturation of the euro creates an additional incentive for shifts out of the dollar. Perhaps even more importantly, the acute slowdown in U.S. productivity growth undercuts the chief rationale for the strong dollar of the second half of the 1990s, and the advent of stagflation conjures up images of the 1970s, which witnessed three sharp dollar declines - including in 1978-79 its closest approximation to date of a "hard landing."

The G7, in conjunction with the major Asian economies, thus needs to be ready with a contingency intervention plan to limit the pace (and perhaps extent) of dollar decline if a free fall begins to eventuate. They should not seek to block the further realignment of exchange rates that is needed to complete the adjustment process, especially against the Asian currencies as elaborated below. However, dollar depreciation of excessive speed and magnitude could exacerbate the present economic weaknesses in both deficit and surplus countries: raising inflation and interest rates in the United States, perhaps sharply, and weakening export and overall growth in Europe, Canada, Australia, and others. In the present fragile environment, it could also ignite another round of global financial turmoil. The results could be sufficiently severe to tip the current global slowdown into a world recession.

It should in fact be simple for the G7 along with the key Asians, most of whom are already intervening substantially, to agree to moderate the pace and amplitude of the dollar's final decline. One would indeed assume that the needed contingency plans have already been prepared. However, the failure of these same countries to anticipate and respond cooperatively to the current financial crisis generates little confidence in their ability to work together even when the benefits of doing so are blindingly obvious. A new initiative on this front, orchestrated particularly by the United States and Euroland as the issuers of the world's two key currencies, is likely to be necessary.

Second, the remaining decline of the dollar needs to be steered in geographically appropriate directions. It should take place, wholly or very largely, against the renminbi and the currencies of other Asian countries along with a number of oil exporters. These countries are running most of the counterpart surpluses to the U.S. deficit, and piling up massive foreign exchange reserves, and the International Monetary Fund has recently certified that most of them enjoy the option of expanding domestic demand to offset the adverse growth impact of declining trade surpluses.

If these surplus countries continue to resist significant appreciation of their exchange rates, the counterparties to the dollar decline will be the currencies (mainly of Euroland, the United Kingdom, Canada, and Australia) that have already risen substantially and whose countries are not running substantial (if any) surpluses. The result would be the creation of sizable new imbalances that would produce new problems for the world economy and, due to the protectionist impact of large currency overvaluations, for the already-beleaguered global trading system. In the meantime, further increases in the Chinese (and other Asian and oil producer) surpluses, coupled with the declining U.S. deficit, will place considerable pressure on the trade positions and growth prospects of the rest of the world.

There is no effective monetary coordination, or even cooperation, among the Asian economies despite their Chiang Mai Initiative and the swap agreements that they have arranged over the past few years. Hence any individual Asian country understandably fears that permitting its own currency to appreciate unilaterally could undercut its position against its neighbors and chief competitors, as has in fact happened to Korea since it let the won rise sharply. This collective action problem can be solved only by an Asian Plaza Agreement, or some informal equivalent, through which the main countries in the region agree to let all their currencies rise more or less in tandem with the renminbi once it is permitted to strengthen substantially. Such an agreement would make an important difference: if all the major East Asian currencies (including the yen) moved together, they would climb by trade-weighted averages of a very manageable 12-15 percent each even if they all appreciated by 30 percent against the dollar.

The International Monetary Fund should take the lead in forging such an "Asian Plaza." Now that Euroland has joined the United States in sharply criticizing China's huge surpluses and massive intervention to limit the rise of the renminbi, and especially as Asian and developing countries such as India and Mexico have expressed similar alarms, the International Monetary Fund should be able to forge a sufficient consensus to do the Asians the great favor of enabling them to act together on this issue. A dividend for the International Monetary Fund should be enhanced status in Asia and thus a major deterrent to any future consideration of a rival Asian Monetary Fund.

The third needed initiative would reinforce the first two but address as well the secular impact of the advent of the euro as a global key currency: creation of a Substitution Account at the International Monetary Fund to avoid some of the exchange-rate impact of dollar diversification by providing an off-market alternative for its realization. Such an account, which was actively negotiated and almost came into being during an earlier bout of dollar diversification in the late 1970s, would accept unwanted dollars from official holders in return for Special Drawing Rights at the Fund. The investors in the account would receive a widely diversified and highly liquid asset with a market interest rate while protecting the value of their (very large) remaining dollar assets. The Euroland countries would avoid additional appreciation of their currency. The United States would avoid excessive weakness of the dollar. The International Monetary Fund would gain a new lease on life.

Such an initiative should thus have widespread appeal and all parties should be willing to use part of the International Monetary Fund's large gold holdings to protect the account against valuation losses if the dollar were to fall further in the future, which was the chief sticking point during the previous negotiation. Since the dollar is probably near its lows, at least for a considerable time, a rebound that would instead generate sizable profits for the Substitution Account over the next decade or so is in fact more likely - as would have occurred had it been agreed in 1980.

CONCLUSION

The partial and continuing correction of the world's previously dominant imbalance, the U.S. current account deficit, highlights and indeed exposes several other actual or potential imbalances that pose major risks and must now join it at the forefront of the global policy agenda: avoidance of a free fall of the dollar, correction of the huge Chinese surplus (and other Asian and oil surpluses), the related prevention of a building of new deficits in Europe and other areas where currency appreciation may go too far, and the exchange rate impact of the advent of the euro as a global rival to the dollar.

Different groups should take the lead in addressing each of these problems. The United States and Euroland should devise the contingency plans to counter a free fall of the dollar against the euro, and spur the initial negotiations to create a Substitution Account to limit the market impact of diversification from dollars to euros. The Asians should work out a coordinated realignment of their currencies against the dollar. The International Monetary Fund is the chief institution to implement most of these plans.

This would also be an ideal agenda for the "new G5" recently created by the International Monetary Fund to conduct its revived multilateral surveillance program. The new group includes China and Saudi Arabia, for the oil exporters, as well as the United States, Euroland, and Japan. It could seize the moment to replace the G7 as the key steering committee for the world economy, greatly strengthening the position of the convening International Monetary Fund in the process. A failure to pursue all three components of the strategy will leave the world at substantial risk in the period ahead and deepen the threats to the world economy that are posed by the current financial crisis.

Source: C Fred Bergsten, The International Economy 2008

5 Steps to Creating a Forecast

A sea captain would not try to sail the Atlantic on a cloudy day without a navigational system. Likewise, a healthcare financial manager should always use a reliable forecasting system when evaluating his or her organization's performance. Simple average calculations or untested business targets that provide general direction aren't enough; financial managers need to be able to identify potential challenges, just as a navigation system would identify open-sea hazards. Yet forecasting is not an idea that should be dismissed out of fear of complex spreadsheets and mystical practices of statisticians and PhDs.

Just as ship captains today use powerful navigational systems to plot, monitor, and adjust their voyage, healthcare financial managers can leverage advanced forecasting techniques to better plan, manage, and adjust the decisions that will help them achieve their performance goals. Although an introduction to forecasting warrants a much larger discussion, this article outlines the major steps and considerations that a healthcare manager should address when embarking upon forecasting.

Step 1. Establish the Business Need

Healthcare financial managers need to clearly understand how their forecast will influence business planning and decisions within their organization. Without this important understanding, the resulting effort will very likely produce adverse results. For example, many business managers rely on monthly cash forecasts. These are used by collections managers for setting monthly cash collection goals, by finance to schedule capital expenditures for clinical equipment, and by staffing managers for their budgets.

Those are just a few examples; in reality, every employee walking through the halls of a hospital is a knowing or unknowing stakeholder in a forecast. Imagine if the cash forecast weren't in sync with business expenses: the consequences for reserves could be disastrous. To establish the business need, these key questions should be answered:

* What decisions will the forecast influence?

* Who are the key stakeholders?

* What metrics are needed and at what level of detail?

* How far forward should the forecast project in terms of years, months, weeks, or days?

* How will accuracy be measured, and what is the acceptable level?

* What is the impact of under- and overforecasting?

The results of forecasting a metric, whether revenue, patient visits, or uninsured bad debt, will be used to support many organizational decisions. By answering the questions above, the healthcare financial manager can ensure that the forecasting effort meets those organizational needs.

Once these questions have been answered, it is important to identify the potential drivers of a forecast. For example, gross revenue is driven by a number of factors throughout the organization from clinical to financial to strategic. Furthermore, each driver can be grouped into internal factors and external (exogenous) factors.

Healthcare organizations can find internal factors like those shown for key hospital revenue drivers in the sidebar above quantified within their own data. As healthcare organizations adopt more sophisticated techniques for forecasting, they should advance their models to consider key market and strategic business influences like those shown in the sidebar below.

Step 2. Acquire Data

For each business driver and influencing factor, the typical forecasting effort should use at least two years, and ideally up to five years, of historical data. When the forecast horizon is short and the time periods are small, less historical data may suffice. To collect the most accurate and robust data sets, all available data sources, such as multiple healthcare information systems (HIS), spreadsheets, small departmental databases, and/or an enterprise data warehouse, should be used. By sourcing from multiple areas, differences in organizational behavior can be balanced out to yield the best data set.

All data should be drawn incrementally in their pure form from available data sources to build up the needed accuracy and completeness. To ensure the richest representation of historical events, the data should not be altered, and quality issues should be addressed sooner in the process rather than later. A common challenge a hospital may face in forecasting is the practice of purging aging trial-balance data from an HIS after one or two years; this practice makes accurate forecasting very difficult. Hospital financial managers need to review purging policies or acquire tools to ensure historical data are available for budgeting and planning.

Collecting exogenous data often requires involving third-party data sources. Several potential sources for external data exist in free public sources, such as census data, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services (CMS), as well as private information firms. Financial managers need to find the optimal source that can provide reliable, high-quality data that can be incorporated into their data structures. In fact, incorporating third-party data can provide valuable benchmarks later in the analysis.

Once the data are collected, it is important to ensure they are clean. Cleaning data often requires more effort than developing the forecast. One of the most efficient ways to perform this process is to visualize the data in trend, distribution, and scatter graphs to find anomalies. This review should be conducted for each top-level metric and major subgrouping, as well as all driving factors, to help identify:

* Missing values and gaps: Missing values can be caused by HIS purging policies, changes to business processes and practices, and switching patient access or billing systems. Gaps in valuable historical data can limit forecasting accuracy.

* Outliers: These are business events that may skew a forecast, such as how Hurricane Katrina affected healthcare facilities in Louisiana and Mississippi. Rather than manually looking through all possible slices of data for outliers, it is best to identify and look for specific events that may be unique to an organization, coupled with tools that automatically search data to find outliers.

Examples of these potential issues are shown in the graph, below. Once found, missing data and outlying data points can be addressed with many well-known methods, including statistical techniques such as nearest-neighbor imputation or substitution of a forecasted value, as well as manual adjustment to a known true outcome.
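As a minimal sketch of the automated outlier search described above, a simple z-score screen can flag months that deviate sharply from the norm. The function name, revenue figures, and three-standard-deviation threshold here are illustrative assumptions, not from the article:

```python
from statistics import mean, stdev

def find_outliers(values, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations from the mean. A simple screen for business events
    (e.g., a hurricane month) that could skew a forecast; flagged
    points should be reviewed, not automatically deleted."""
    mu = mean(values)
    sigma = stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# Eleven typical months of gross revenue ($M) plus one
# hurricane-depressed month at the end.
monthly_revenue = [100, 102, 98, 101, 99, 103, 100, 97, 102, 101, 99, 40]
print(find_outliers(monthly_revenue))  # flags the final month
```

In practice this screen would be run per metric and per subgrouping, with the threshold tuned to the organization's data.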

Step 3. Build the Model

Once the business needs, drivers, and influencing factors have been established with the associated historical data, a decision needs to be made on the type of forecasting model to use. The forecasting model is the technique or algorithm that determines the projections based on identified business drivers, influencing factors, and business constraints. There are three major categories of forecasting models: cause-and-effect, time series, and judgment.

Many more forecasting models are also available, and there is no overall best choice. In fact, forecasting models are often combined to produce the most accurate results for a given business need, and it may be necessary to consult with business and technical experts for advice when selecting the best model for a given situation.

As an example, let's explore creating a hospital revenue forecast that has seasonal considerations. The graph at the top of page 104 depicts hospital revenue trending up and peaking at each year end. There are 48 months of historical data available, and the most recent five months of data have been selected to test and evaluate the model once it is built. This technique is referred to as training and testing the model.

An application of a cause-and-effect model that incorporates seasonality and judgment factors is shown in the graph at the bottom of page 104. Seasonal variations can be attributable to a number of causes, such as a hospital being located near a ski resort, or the "snowbird" effect, when a large population migrates south during the winter months. Clinical factors often have seasonal components as well, the most obvious being flu season. Because the healthcare financial manager in this example knew there were seasonal trends, and ample historical data were available, the model was constructed to include these influences.

Judgment variables can also be used to account for infrequent events such as employee strikes, acquisition or construction of a new acute care center, construction of a new wing, or construction of a new hospital in the area. For example, a healthcare financial manager can use the opening of a physical therapy center at an acute care hospital in 2006 to predict the effect of opening a physical therapy center at a different facility in 2008.
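The article's model is shown only graphically, but a minimal multiplicative version of the same idea, a baseline level scaled by per-month seasonal indices, with optional judgment multipliers for known one-off events, can be sketched as follows. The function names, the 24-month history with a year-end peak, and the judgment-multiplier mechanism are illustrative assumptions:

```python
def seasonal_indices(history, period=12):
    """Ratio of each calendar month's average to the overall average."""
    overall = sum(history) / len(history)
    return [
        (sum(history[m::period]) / len(history[m::period])) / overall
        for m in range(period)
    ]

def seasonal_forecast(history, horizon, period=12, judgment=None):
    """Project future periods as overall level times seasonal index,
    optionally scaled by judgment multipliers keyed by future-period
    offset (e.g., {6: 1.10} for a new wing opening in month 7)."""
    judgment = judgment or {}
    base = sum(history) / len(history)
    idx = seasonal_indices(history, period)
    n = len(history)
    return [
        base * idx[(n + h) % period] * judgment.get(h, 1.0)
        for h in range(horizon)
    ]

# 24 months of revenue ($M) with a strong year-end peak,
# echoing the article's example.
history = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 12, 14] * 2
next_quarter = seasonal_forecast(history, 3)
```

A production model would replace the flat baseline with a fitted trend and hold out the most recent months for testing, as the article describes.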

Step 4. Evaluate the Results

Once the model has been built and executed, the resulting forecast accuracy should be evaluated using the most recent time period. Overall model accuracy should be measured using statistical functions such as the F statistic, the standard error of the estimate, or R². R² is a statistical measure used for regression models that describes what percentage of the changes from month to month can be explained by the forecast. By visualizing the results as shown in the graph below, a healthcare manager can easily understand a model's accuracy.

Model accuracy should be tracked and monitored by calculating the difference from month to month. The accuracy rate may vary from month to month, but in any month, a forecast accuracy of more than 85 percent is considered to be very good. To compare forecast accuracy over time, the simple yet powerful mean absolute percentage error (MAPE) test should be used. If the MAPE increases over time, then not all influencing factors have been included in the model. By constantly keeping an eye on this test, a healthcare financial manager can easily understand whether the forecast model needs to be tuned.

Another powerful tool to test how a model will handle future conditions is scenario analysis. By running different scenarios, especially with known outcomes, healthcare managers can gain a comfort level of model behavior and accuracy. For example, a forecast for inpatient volume could test a base, a conservative, and an aggressive scenario for population growth rates, additions of new managed care populations and employer groups, and clinical initiatives to convert inpatients to outpatients. Under each scenario, the forecast should provide reasonable results.
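The scenario test described above can be run with a simple driver sweep. Here is a sketch of the inpatient-volume example, where the scenario names, starting volume, and annual growth rates are illustrative assumptions rather than figures from the article:

```python
def scenario_forecast(current_volume, annual_growth, horizon=12):
    """Project monthly inpatient volume under named annual-growth
    scenarios by compounding the equivalent monthly rate."""
    projections = {}
    for name, rate in annual_growth.items():
        monthly = (1.0 + rate) ** (1.0 / 12.0)
        projections[name] = [
            current_volume * monthly ** m for m in range(1, horizon + 1)
        ]
    return projections

# Base, conservative, and aggressive population-growth scenarios
# applied to a current volume of 1,000 inpatients per month.
scenarios = scenario_forecast(
    1000.0,
    {"conservative": 0.00, "base": 0.03, "aggressive": 0.06},
)
```

Running scenarios with known historical outcomes, as the article suggests, is the quickest way to build confidence that the model behaves reasonably at the extremes.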

Step 5. Apply the Forecast

Once all the work has been done to create a high-quality forecast, it should be deployed to the stakeholders and end users in a manner tailored to their use. The forecast should ideally be made accessible to all appropriate business areas in reports and analyses packaged for unique end-user perspectives. For example, a contract manager is most interested in revenue forecasts by payer contract. Each healthcare financial manager should also have access to a "sandbox" area to perform "what-ifs" to better understand the impact of business decisions.

An often overlooked value of a forecast model is that it allows financial managers to better evaluate how the hospital is performing when controlled for external factors. When the forecast is for an increase in revenues for a particular month, and the actual performance is below both the forecast and the previous month's actuals, there may be a problem that needs to be resolved. For example, in the case of gross revenue, a hospital could control for the fact that it's the holiday season and no one wants to be in the hospital, or perhaps local employment is down, or people are moving out of the county. By letting the forecast adjust for factors beyond their control, financial managers can objectively judge how items they can control (such as quality of care, prices, and payer relationships) are performing.

A consistent challenge healthcare organizations face is identifying a problem after it has already damaged organization performance. To mitigate this problem, healthcare financial managers should put into place tools to check their forecasts on an ongoing basis so they will be aware of downward trends months before those trends affect the performance of the organization.

"Those Who Fail to Plan, Plan to Fail"

Today's advanced forecasting techniques allow healthcare managers to plan, manage, and continually monitor where their organization is headed in the future. It is often said that "Those who fail to plan, plan to fail." Through advanced forecasting, healthcare financial managers can ensure that their organizations have a successful voyage to better performance.

Source: Doug Stark, David Mould, Alec Schweikert, Healthcare Financial Management, April 2008