Gestures and their concurrent words are often said to be meaningfully related and co-expressive. Research has shown that gestures and words are each particularly suited to conveying different kinds of information. In this paper, we describe and compare three methods for investigating the relationship between gestures and words: (1) an analysis of deictic expressions referring to gestures, (2) an analysis of the redundancy between information presented in words versus in gestures, and (3) an analysis of the semantic features represented in words and gestures. We apply each of these three methods to a single data set, in which 22 pairs of participants used words and gestures to design the layout of an apartment. Each of the three analyses revealed a different picture of the complementary relationship between gesture and speech. According to the deictic analysis, speakers marked only a quarter of their gestures as providing essential information that was missing from their speech, whereas the redundancy analysis indicated that almost all gestures contributed information that was not present in the words. The semantic feature analysis showed that participants conveyed spatial information in their gestures more often than in their words. A follow-up analysis showed that participants conveyed categorical information (i.e., the name of each room) in their words. Of the three methods, the semantic feature analysis yielded the most detailed picture of the data, and it served to generate additional analyses. We conclude that although analyses of deictic expressions and redundancy are useful for characterizing gesture use under differing conditions, the semantic feature method is best suited for exploring the complementary, semantic relationship between gesture and speech.