Contemporary research in artificial environments has highlighted the need for autonomy in artificial agents. Autonomy admits many interpretations depending on the field in which it is used and analyzed, but the majority of researchers in artificial environments argue in favor of a strong, life-like notion of autonomy. Starting from this point, the main aim of this paper is to examine the possibility of the emergence of autonomy in contemporary artificial agents. The theoretical findings of research in the areas of living and cognitive systems suggest that the study of autonomous agents should adopt a systemic and emergent perspective for analyzing the evolutionary development of the notions/properties of autonomy, functionality, intentionality and meaning, as the fundamental and characteristic properties of a natural agent. An analytic indication of the functional emergence of these concepts and properties is provided, based on the characteristics of the more general systemic framework of second-order cybernetics and of the interactivist framework. The notion of emergence is a key concept in such an analysis, which in turn provides the ground for a theoretical evaluation of the autonomy of contemporary artificial agents with respect to the functional emergence of their capacities. The fundamental problems for the emergence of genuine autonomy in artificial agents are critically discussed, and some design guidelines are provided.