Siviko, which I established along with three partners, is an engineering company focused on solving complex manufacturing problems. Understanding our customers’ manufacturing processes, correctly identifying the issues, and offering solutions with maximal impact are of prime importance for the quality of our service. Therefore, I have long been interested in how manufacturing works, and I have always looked for opportunities to deepen this knowledge.
One such opportunity was the Principles of Manufacturing (PoM) MicroMasters program offered by the Massachusetts Institute of Technology (MIT) and organized by its renowned Mechanical Engineering department. It is offered online on the educational platform edx.org and is equivalent to one semester of the on-campus Master of Engineering in Advanced Manufacturing and Design degree. I enrolled in the program in August 2019 and have just completed it (June 2020). I am very happy with what I have learned, and I would like to share my experience and key learning points. I think that people interested in manufacturing might find the information useful. In addition, some could even find the motivation to enroll in the PoM program as well.
What does Principles of Manufacturing mean? I think this is an important question that deserves a detailed answer, as it is the very reason the PoM program was created. I will define both terms as the faculty explained them in the program.
First, let us start by defining what a principle is. There are many definitions, but one of the best is that a principle is a basic truth that explains or controls how something happens or works.
Second, what is the definition of manufacturing, and how does it differ from other human activities? Manufacturing is production in volume, in which there is a continuous flow of “new” materials going through a system of unit processes. This is how it differs from artisanship or the production of single prototypes. Furthermore, it combines multiple supplier streams and has to create a product with minimal variation. Manufacturing is a massive undertaking and demands huge capital investment in the form of fixed assets, materials and components, and labor. The ultimate goal of any manufacturing unit is to meet customer demand and to be commercially viable over time.
Principles of Manufacturing, therefore, stands for the basic truths that explain how the complex world of manufacturing works. They are a set of elements common to all manufacturing industries that revolve around the concepts of flow and variation. All of them are analytical methods to model and control flow and variation at four levels of the enterprise: the unit process, the manufacturing system, the supply chain, and the overall business.
Those principles are fundamental because they apply to any industry and support technological development at any scale and any level of technological sophistication. Furthermore, they can aid all sorts of decision-making because they are based on data and analytics. They can be used to improve operations tremendously, no matter what the state of technology is.
The flow of materials through variable processes and systems has been around since the first industrial revolution. Even in the new era of Industry 4.0, this still holds true. No matter what the technology is, in our physical world there is always a flow of materials through variable processes and systems, where they are transformed into the final product. In fact, one cannot make a proper transition into Industry 4.0 without knowing the principles. Technologies such as digitization, industrial control systems (SCADA or MES), artificial intelligence, and machine learning work really well when the manufacturing process is understood in detail. Then all the gathered and analyzed data makes sense and can be turned into informed decisions and actions. In order to have smart factories, one first needs smart people.
The faculty at MIT has defined four main attributes of manufacturing process performance – rate, quality, cost, and flexibility – which are key in evaluating how well a manufacturing system performs.
The PoM program has eight online courses and four final exams, which represent the equivalent of one semester of coursework at MIT at a Master’s degree level. They provide a fundamental basis for understanding, focusing on the analysis, characterization, and control of flow and variation at different levels of the enterprise.
In the next sections, I will describe the key points in each of those courses.
2.1. Manufacturing Process Control I and II
The first part of the course is about modeling and controlling temporal and spatial variation in unit processes using statistical methods, mostly statistical process control (SPC). SPC was first developed by Walter Shewhart in 1924, while he was working on an assignment to improve the voice clarity of the carbon transmitters in AT&T telephone handsets. AT&T was producing millions of telephone handsets, and inspecting each produced unit was virtually impossible.
Shewhart’s observation was that most defects come from variation in the manufacturing process. The flow of “new” materials through the processes has inherent random variation, which may result in variability in physical dimensions and properties. If those variations are within the targeted specification limits, all is well. If they are outside the limits, the product is defective. For example, say that you have to machine a shaft with a diameter of 7mm and a tolerance of ±0.1mm. If the diameter is 7.1mm or 6.9mm, it is within specification and acceptable. In contrast, if it is 6.89mm or 7.11mm, it is categorized as defective and has to be either re-worked or scrapped.
There are two major sources of variation – deterministic and random. Deterministic variation has a cause that can be identified and corrected: an operator’s fault, tool wear, defective raw material, ambient climate, etc. Purely random variation, on the other hand, has a source that cannot be determined or corrected. If one could detect and correct all deterministic variation, only purely random variation would remain. The process would then be in statistical control, and keeping it in control reduces waste and improves quality.
When the process is in statistical control, if we collect batches of data from the processed products and plot them over time, the distribution of the data will start to resemble the normal (also known as Gaussian) distribution. It will look just like this figure:
From the above figure, one can observe that 68.2% of all points should be within one standard deviation (denoted with the Greek letter σ – sigma) of the mean, 95.4% within two standard deviations, and so on. For example, if we take the previous example, in which we measure the diameter of a shaft, and say that its mean is exactly 7.00mm and the standard deviation is 0.05mm, then 68.2% of all measured points should fall between 6.95mm and 7.05mm.
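These percentages follow directly from the normal distribution and can be computed with the error function from the Python standard library. A minimal sketch reusing the shaft numbers above (the helper name is mine, not from the course):

```python
import math

def within_k_sigma(k):
    """Fraction of a normal distribution lying within k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

# Shaft example: mean 7.00 mm, standard deviation 0.05 mm.
mean, sigma = 7.00, 0.05
for k in (1, 2, 3):
    lo, hi = mean - k * sigma, mean + k * sigma
    print(f"{within_k_sigma(k):.2%} of shafts between {lo:.2f} mm and {hi:.2f} mm")
```

Running this prints the familiar 68.27%, 95.45%, and 99.73% bands.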
Once we have determined the mean and the standard deviation of a stable process with no deterministic variation, we can start running statistical process control. In essence, with each measurement recorded we have to determine whether we are still seeing the same normal distribution, or a deterministic variation has kicked in and changed it. In other words, each time we record a new data point, we have to determine whether it is what we expect to see based on the parameters of the known normal distribution, or something has changed and the process is no longer in statistical control.
This is done with the help of a control chart like the one shown below. When the collected data points are plotted over time we get the following graphic:
The blue dots are what we expect to see if the data indeed follows the expected normal distribution: they should show no patterns and should stay within the set control limits. If we start to observe unusual behaviour – data points outside the control limits, or clear patterns such as 8 consecutive points in one direction – we have to stop the manufacturing process and examine whether there is a deterministic cause for this variation.
The red dotted lines are the control limits. They are commonly set at ±3σ, so the probability that a point will fall outside them is only about 0.3%. Such a point is usually a signal that the process is no longer in control. In this case, the production has to be stopped and examined for causes of variation.
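Putting the pieces together, a basic control-chart check can be sketched in a few lines of Python. The function name, the sample data, and the specific run rule (8 consecutive points on one side of the mean) are illustrative assumptions of mine, not code from the course:

```python
def control_chart_alarms(data, mean, sigma, run_length=8):
    """Flag out-of-control signals on a Shewhart chart: points beyond the
    +/-3-sigma control limits, and runs of `run_length` consecutive points
    on the same side of the mean (a common run rule)."""
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
    alarms = []
    run, prev_side = 0, 0
    for i, x in enumerate(data):
        if x > ucl or x < lcl:
            alarms.append((i, "beyond 3-sigma limit"))
        side = 1 if x > mean else (-1 if x < mean else 0)
        run = run + 1 if side != 0 and side == prev_side else (1 if side != 0 else 0)
        prev_side = side
        if run == run_length:
            alarms.append((i, f"{run_length} points on one side of the mean"))
    return alarms

# Shaft diameters (mm) around a 7.00 mm mean with sigma = 0.05 mm:
# one outlier beyond the 7.15 mm upper limit, then a sustained upward shift.
samples = [7.00, 6.97, 7.03, 6.99, 7.16,
           7.02, 7.04, 7.03, 7.06, 7.02, 7.05, 7.01, 7.04]
print(control_chart_alarms(samples, mean=7.00, sigma=0.05))
```

Both signals – the single outlier and the sustained shift – would trigger an investigation on a real production line.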
SPC is a tool that is very useful for optimizing process quality over large flows when testing all produced parts is either too costly or impossible. It could help in detecting problems before they result in large quantities of defective products or even avert this happening altogether.
While the first part of the course was entirely dedicated to SPC and passive observation of the existing process, the second one was about actively modeling and minimizing variation for the sake of process improvement. The course explains the most common techniques for determining causes of variation and different methods for achieving process optimization and robustness. I will not go into detail about this part of the course, as it is very technical, but it definitely gives a good glimpse of what one can do with data from the manufacturing process.
For me personally, those two courses were the most difficult, as they involved a lot of statistics and mathematics. I studied statistics 15 years ago, but I had to invest significant time in dusting off my knowledge. I also had to learn how to perform some basic operations in Matlab. I will definitely use those skills, as in the meantime we have become an integrator of the SPC software by Sepasoft, which is a natural extension of our work with the Ignition platform. Understanding and appreciating randomness can be useful in other aspects of our work, such as the design of new machines and equipment or the calibration of certain automated processes. In one of our latest projects, in which we created a machine for quality control through machine vision, we used such techniques to calibrate the machine.
The second course in the program is about modeling and controlling flows in manufacturing systems with stochastic (random) elements and inputs. The overarching theme is that flow in a factory system has variability caused by the unreliability of machines and equipment, variations of rate, setup times, inventory buffer sizes and levels, blockage or starvation of machines, etc. Factories are viewed as random dynamical systems. The focus of the course is on quantity – in particular, how to meet goals for a specified quantity at a specified time. This course fascinated me the most from the very beginning, as it is naturally linked to the lean manufacturing concept, which I studied during my MSc in Management degree.
It was a tough and rewarding course. The first part sets up the theoretical basis for modeling manufacturing systems: probability, stochastic processes, and queueing theory. Then it reviews some quantitative models of single-part-type flow lines with unreliable machines. It also explains the notion of bottlenecks, the influence of inventory buffer sizes on production rates, and the basic ideas that make the Toyota Production System so interesting. Eventually, when the mathematical models cannot handle all the complexity of the real world, one can use simulations.
A textbook for this course was the book “The Goal: A Process of Ongoing Improvement” by Eliyahu M. Goldratt. This is one of the best business books that I have read and I highly recommend it. Mr. Goldratt is also famous as the inventor of the Theory of Constraints, which is a popular managerial concept. Furthermore, I also used simulation software called AnyLogic for the first time. You can try it as well – it has a free version. The course uses a lot of animations to explain the basic ideas, which makes it fun to watch.
A big part of the course revolved around unreliable machines placed into a chain of manufacturing processes. An unreliable machine is one that can fail at times and stop producing; it is characterized by its mean time to fail (MTTF) and mean time to repair (MTTR). The nominal production rate is thus decreased due to the unreliability.
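The long-run output of such a machine is simply its nominal rate scaled by its availability, MTTF / (MTTF + MTTR). A tiny Python sketch with hypothetical numbers:

```python
def effective_rate(nominal_rate, mttf, mttr):
    """Long-run output of an unreliable machine: the nominal rate scaled
    by its availability, MTTF / (MTTF + MTTR)."""
    return nominal_rate * mttf / (mttf + mttr)

# Hypothetical machine: 100 parts/hour when running, fails on average
# every 54 minutes and takes 6 minutes to repair (availability 0.9).
print(effective_rate(100, mttf=54, mttr=6))  # 90.0 parts/hour
```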
Between each machine, there is an inventory buffer. When a machine produces a part, this part is stored in the inventory buffer downstream from it. If a buffer is full, then the machine is blocked and cannot produce anymore. If the machine upstream of it fails and the buffer upstream is empty, then the machine is starved and is forced to stop production.
A simple presentation of this chain looks like this:
There are at least two important takeaways from this setting that have practical implications.
Seldom in real life do all machines in a chain have the same production rates or the same reliability levels. This means there is always a machine that is either slower or less reliable than the rest of the chain. This machine is called a bottleneck (in the figure below it is Machine 2). One cannot produce more than the bottleneck can handle, as it either blocks or starves the other machines in the chain. Therefore, one should strive to optimize the efficiency of the whole chain rather than to get 100% productivity out of each separate machine. The latter only results in the accumulation of a lot of unused inventory, while the overall productivity of the chain does not change at all. One should instead focus on the bottleneck and make sure that it works at its maximum – it should never be idle, blocked, or starved, and when it fails, it has to be repaired as fast as possible.
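As a first-cut illustration (ignoring buffer interactions), one can compare the isolated effective rates of the machines and take the minimum as the bottleneck. All numbers below are invented:

```python
# Hypothetical chain: (name, nominal rate in parts/hour, availability).
machines = [("Machine 1", 100, 0.95),
            ("Machine 2", 110, 0.80),
            ("Machine 3", 105, 0.92)]

# Isolated effective rate of each machine; the chain cannot exceed the minimum.
rates = {name: rate * avail for name, rate, avail in machines}
bottleneck = min(rates, key=rates.get)
print(rates)
print("bottleneck:", bottleneck, "at", rates[bottleneck], "parts/hour")
```

Note that Machine 2 is the bottleneck even though its nominal rate is the highest: its poor availability drags its effective rate below the others.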
This is also useful guidance for future investments – the best returns come from investments in the bottleneck. When a bottleneck is no longer one, due to increased capacity or improved efficiency, usually a new one appears. Sometimes the bottleneck might not even be physical, but a result of incorrect internal policies or procedures. The concept of bottlenecks is a powerful tool.
Inventory has a tremendous impact on productivity and is a very good indicator of what is happening in the manufacturing chain. It acts as a buffer between the unreliable machines and allows the other machines in the chain to continue working while one is down for repair. If there is no inventory buffer at all, then when a single machine fails, all others in the chain have to stop working as well. However, holding inventory costs money and space. The highest productivity levels can be achieved with infinite buffer sizes; however, this is impractical. Nevertheless, placing strategic inventory buffers of various sizes can have a tremendous impact on the productivity of the chain.
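These effects are easy to see in a toy Monte Carlo simulation. The Python sketch below is my own (with arbitrary failure and repair probabilities, not the course's models): two identical unreliable machines separated by one finite buffer, with throughput rising toward the machines' stand-alone availability as the buffer grows:

```python
import random

def simulate_line(buffer_size, steps=200_000, p_fail=0.1, p_repair=0.5, seed=42):
    """Two identical unreliable machines separated by one finite buffer.
    Each time step a working machine may fail and a failed one may be
    repaired; M1 fills the buffer unless blocked, M2 drains it unless
    starved. Returns the long-run throughput in parts per step."""
    rng = random.Random(seed)
    up = [True, True]
    buffer, finished = 0, 0
    for _ in range(steps):
        for m in range(2):
            if up[m]:
                up[m] = rng.random() >= p_fail       # may fail this step
            else:
                up[m] = rng.random() < p_repair      # may be repaired
        if up[0] and buffer < buffer_size:           # M1 produces unless blocked
            buffer += 1
        if up[1] and buffer > 0:                     # M2 produces unless starved
            buffer -= 1
            finished += 1
    return finished / steps

# Each machine alone is up 0.5 / (0.1 + 0.5) ~ 83% of the time; a bigger
# buffer decouples the machines and moves throughput toward that limit.
for b in (1, 5, 20):
    print(f"buffer size {b:2d}: throughput {simulate_line(b):.3f}")
```

With a buffer of one part, the line loses a noticeable share of its output to blocking and starvation; a buffer of twenty recovers most of it.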
I cannot cover the whole course in detail in this article. However, I can say that it is an excellent introduction to designing systems that maximize rate and minimize cost. If one is to apply those principles in the real world, having correct and up-to-date information from the production floor is extremely useful. This explains all the talk about gathering and analyzing data from the production floor in the last few years. For example, if one has an Overall Equipment Efficiency (OEE), Track and Trace, or SCADA system in place, such as Ignition, one can use the gathered information to optimize the manufacturing system. Furthermore, the data can be used to create simulations. It is far cheaper and faster to play around with simulations first, before committing huge amounts of time, labor, and money to real-life projects.
We have also successfully applied this knowledge in a project of ours: Robot Palletizing Optimization. We identified the bottleneck, examined it, and eventually improved its performance significantly. The introduction of a buffer had a huge effect on the rate – we improved it by more than 75%!
The third part of the PoM is about operating and designing optimal manufacturing-centered supply chains. The emphasis is once more on uncertainty, but this time in terms of demand, supply, and logistics. The chief task is to manage inventory and capacity so as to achieve the desired level of service.
There are two major topics in the class – managing inventory, and planning capacity and network design. Inventory can be modeled through many different methods. Some of them are basic, such as the newsvendor problem, while others, such as the economic order quantity (EOQ) model or the guaranteed-service model, are more advanced. They are widely used in industry, and the course provides examples from Procter & Gamble, HP, and Intel, which have optimized their supply chains and inventory using such methods and, of course, software solutions.
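As an illustration of the newsvendor logic, here is a short Python sketch using the standard library's NormalDist. The critical-fractile formula is the textbook solution for normally distributed demand; the product numbers are hypothetical:

```python
from statistics import NormalDist

def newsvendor_quantity(mean, stdev, unit_cost, price, salvage=0.0):
    """Critical-fractile order quantity for normally distributed demand.
    cu = margin lost per missed sale, co = loss per unsold unit; order up
    to the cu / (cu + co) quantile of the demand distribution."""
    cu = price - unit_cost       # underage cost
    co = unit_cost - salvage     # overage cost
    critical_fractile = cu / (cu + co)
    return NormalDist(mean, stdev).inv_cdf(critical_fractile)

# Hypothetical product: demand ~ N(1000, 200); it costs 6, sells for 10,
# and unsold units are salvaged for 3. A lost sale (cu = 4) hurts more
# than an unsold unit (co = 3), so the optimal order is above mean demand.
q = newsvendor_quantity(mean=1000, stdev=200, unit_cost=6, price=10, salvage=3)
print(round(q))
```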
For me personally, the second topic of planning capacity and network design is more interesting. There is extensive use of optimization modeling, and a huge part of it is done in spreadsheets with Excel Solver and OpenSolver, which are freely available Excel plug-ins. Actually, one can use those tools for many more tasks than modeling capacity, and this is what I really like about them. Similar, but more complex and powerful, solver tools have been used in US companies for decades. I still have not heard of anyone using such tools in Bulgaria. Perhaps there is no such dire need, as the companies are smaller and their supply network designs and capacity planning are simpler. Or maybe I am just ignorant and such software is used here as well. I do not know.
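The same kind of spreadsheet solver model can be written as a small linear program. The sketch below assumes SciPy's linprog is available and uses an invented two-plant, two-market network; the structure (capacity constraints, demand constraints, cost minimization) mirrors what the solver plug-ins do:

```python
from scipy.optimize import linprog

# Hypothetical network: two plants shipping to two markets.
# Decision variables x = [x11, x12, x21, x22] = units from plant i to market j.
cost = [2, 3, 4, 1]               # cost per unit for each plant-market pair

A_ub = [[1, 1, 0, 0],             # plant 1 capacity: x11 + x12 <= 100
        [0, 0, 1, 1]]             # plant 2 capacity: x21 + x22 <= 80
b_ub = [100, 80]

A_eq = [[1, 0, 1, 0],             # market 1 demand: x11 + x21 = 90
        [0, 1, 0, 1]]             # market 2 demand: x12 + x22 = 60
b_eq = [90, 60]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)  # cheapest plan: 90 units plant 1 -> market 1,
                       # 60 units plant 2 -> market 2
```

Real network-design models have the same shape, just with many more plants, markets, and constraints.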
Another interesting topic in the course is how to design a network in order to achieve better flexibility. Total flexibility means that each machine, operator, or factory (production unit) can produce every product. However, this is either too expensive or unrealistic to achieve. The course instead introduces a framework for evaluating the benefits of flexibility, and it actually shows that limited flexibility, if deployed in the “right way”, can achieve the benefits of total flexibility. The “right way” is to design the network so that each production unit can substitute at least one other production unit for a specific product. One has to create the longest possible looped chains, by which the production resources are linked to more and more products. In this way, one can dynamically shift production capacity between products, which achieves almost the same benefits as total flexibility.
Overall, it was a challenging course that involved a lot of modeling and calculations. Nevertheless, it was also very rewarding, as I understood how supply chain management is done in some of the world’s business leaders. I am grateful that the lecturers had very concise and understandable explanations of seemingly complex topics. This course would be very useful for professionals involved in inventory management, capacity planning, or supply chain management.
The last part of the PoM is about understanding the uses and flows of business information to start up, scale up, and operate a manufacturing facility. One can only imagine how much variability there is at this level of decision-making. There are constant changes in the flow of capital, sales, energy and resource purchasing, and market knowledge, as well as variations in competition, personnel, etc. So it is essential to link the business objectives to the production objectives.
The first part of the course covers topics such as business plans, sales and marketing, financial accounting, project planning, and intellectual property and licensing. Since the whole program is mainly targeted at engineers, this course is broad and does not go into much detail. My academic background is in business, so it was arguably the easiest course for me. However, I did learn a few new things about writing a business plan, and I even applied them in our own company. I am quite happy with the result. The template that I used was created by Derby Management and can be downloaded free from their website.
The second part of the course was organized in case studies that cover different aspects of business management such as leadership and business culture, marketing, operations, technology, etc. This was more difficult but quite interesting as the case studies were well made and informative.
All eight courses took 34 weeks to complete, and I had to spend approximately 18 hours per week. It is a massive effort, especially if you are not familiar with statistics and probability; I advise you to take a course on those topics beforehand. You should also be very familiar with Excel and feel comfortable with algorithms and mathematical models. You do not need to be an engineer to complete the program, and you do not need to understand particular manufacturing processes in detail.
To pass each course, you have to do homework assignments every week and pass an exam at the end. You receive points for both homework and exams, and you need at least 60% to get a passing grade. I personally got four A grades (90–100%) and four B grades (76–89%), two of them just 1 or 2% away from an A.
A big plus of the program is the faculty. All of them have excellent theoretical training and a good pedagogical approach. Moreover, they have worked on many real projects in various companies and industries. That is why they have a very practical approach to everything they teach.
I believe that the Principles of Manufacturing MicroMasters program could be useful to many people involved in manufacturing. Even though it requires a significant investment of time and effort, it is totally worth it. The principles are truly universal and can be applied in any industry at any level of technological sophistication. Even the advent of the new technologies that are part of Industry 4.0 will not change that. Quite the contrary – Industry 4.0 will introduce new technologies that are significantly more focused on the extraction and analysis of data. Data is useful only when one can make well-informed decisions based on it. If you are not familiar with how your business works, then this data is meaningless and such an investment will never pay back. And if you are, it will be of great help in increasing the commercial success of the manufacturing business in the long run.
We have always been committed to increasing the awareness level of the local industry and we have been supportive of organizations such as Lean Institute Bulgaria, Trakia Tech, Stoos Network Bulgaria, and the Professional Association of Robotics and Automation that also want to spread better managerial practices and new technologies. There is a visible trend that local managers and engineers want to increase their knowledge and perfect their skills. I hope that this trend will become even stronger because we need smart people in order to have smart factories.