Coordination and the Internet

The Internet changes everything. Not only does the Internet change the applications that people use and the systems with which they interact, it also changes the way we model, design, and build those applications and systems. Moreover, the Internet changes the theories with which we understand computation. All of these changes in theory and practice are manifested in the coordination, broadly understood, of software components. All of these changes are necessary because the Internet promotes---indeed, all but requires---the autonomy of software components. Coordination is essential if for no other reason than to control this autonomy.

It is instructive to consider at least three main forms of autonomy, corresponding to the three main roles that people play in networked computing environments. Design autonomy, also termed heterogeneity, reflects the independence of designers and vendors to build their software components as they please. It is realized in the schemas and representational assumptions of the designs and implementations of the various components. Configuration autonomy reflects the independence of system administrators to set up individual hosts and networks as they see fit. It is realized in dynamically linkable libraries and directories, sometimes through arbitrary choices among incompatible libraries and services. Execution autonomy reflects the independence of end users---both consumers and enterprises---to act as they prefer. It is realized in software tools such as browsers and personal assistants.

The forms of coordination that arise correspond directly to the forms of autonomy that occur in open environments. Design autonomy forces coordination about schemas and ontologies: without such coordination, components created by different designers would not understand each other well enough to interoperate coherently. Configuration autonomy forces coordination for resource discovery, so that components can be linked up with other components that supply the services they require; without such coordination, system administrators would not be able to set up large distributed systems. Execution autonomy forces coordination at run time, e.g., through interaction protocols; without such coordination, users would not be able to interact coherently and would fail to carry out even the simplest business transactions with each other.

Prior to the expansion of the Internet into our daily personal and business activities, all forms of coordination were exercised solely by humans. Moreover, they were exercised before a given distributed system was instantiated and applied. The components of a running distributed system had no autonomy. Designers chose a fixed, usually proprietary, schema and ensured that the various components worked together. System administrators were forced to adopt closed solutions, typically provided by some major vendor. Users were forced to act strictly as required by whatever application program had been configured to run on their computers. For anything nontrivial, users, especially enterprises, were forced to follow a preset sequence of actions that had been deemed acceptable. Because the traditional approaches required human effort, they simply would not scale to Internet-sized systems. For the same reason, they could not accommodate the dynamic nature of such systems, where components and users arbitrarily come and go.
The obvious solution has been to delay coordination as much as possible. Thus the scope of design coordination narrows to include only the most basic metainformation, so that design decisions about schemas and ontologies can be deferred to configuration. Likewise, the scope of configuration coordination narrows, so that a separate phase of configuration prior to execution becomes vanishingly small. With minimal human-supplied information, configuration protocols arrange for automatic resource discovery and binding, leading to efficient, low-cost configurations. Conversely, the scope of execution coordination grows to include many of the tasks of the other two forms of coordination. Discovery protocols apply at run time to autoconfigure a system; moreover, they acquire richer structures to accommodate discovery based on increasingly subtle semantic properties that were heretofore the domain of design coordination.

Coordination is a good thing, but contrary to Mae West's famous dictum, too much of a good thing is not necessarily wonderful. Coordination can be expensive to achieve, because it requires additional computation and communication beyond the basic application itself. By definition, coordination reduces the autonomy of the participating components. Moreover, wherever it is applied, it reduces the set of allowed computations and adds overhead on computational resources. In other words, coordination is like friction---we need some, but the less we have of it the better.

Achieving coordination in this minimalist fashion presupposes subtle models of interaction that enable us to specify the required coordination with great finesse. Such models would need to be integrated with programming models and software architectures. They would also need to be operationalized in infrastructure that accommodates any special properties of the underlying information resources. For these reasons, coordination is most naturally realized through agents.

The changes that the Internet brings about affect not only wide-area public networks but virtually all forms of networked computing, because what is essential for the Internet at large is highly desirable even for smaller networked computing environments, such as those within enterprises. Indeed, the additional knowledge available to designers and components alike in such environments facilitates the development of richer forms of coordination, which lead to greater efficiency and effectiveness in the resulting distributed systems. For this reason, I believe that the scope of coordination includes all of modern computing. To live up to our own expectations, however, we will need to develop increasingly rich models for coordinating components and increasingly sophisticated platforms for realizing and operationalizing those models.

That great progress is being made along these lines is evidenced by the excellent volume you are now reading. I applaud the editors, Andreas Omicini, Franco Zambonelli, Robert Tolksdorf, and Matthias Klusch, for the quality of the works that they have assembled and organized for our reading pleasure. Enjoy!

Munindar P. Singh
Raleigh, North Carolina