In this relatively short chapter, the issues that relate to the methods and instruments used to capture new data, and to the functional integration of those data with other data, are briefly addressed. The quality of data begins with proper experimental design, but it also depends heavily on how the data are captured at the source. As explained earlier, a significant share of reproducibility problems can be traced directly to insufficient detail on measurements and to missing or imprecise information on reagents, instruments, and other elements of the data-creation and capture process.

When data are published, the metadata should also be, as far as possible, readable by the workflows that reuse the data. Ideally, therefore, all elements that may influence the reproducibility of results, and of conclusions drawn from the data in earlier use cases, should be part of the FAIR metadata. Metadata should also richly describe issues related to the sometimes proprietary software and data formats coupled to commercially available instruments, as well as issues related to the recovery of data from earlier formats, for instance from social media or electronic health records.

The role of the data steward in this step of the research data cycle is probably somewhat less central than in other steps, but the actual capture of the data, the richness of the metadata (whose values may themselves have to be captured), and other issues that will influence the next steps in the data stewardship cycle require the continuous attention of the data steward in the team.
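To make the idea of workflow-readable capture metadata concrete, the following is a minimal sketch of recording measurement provenance at the source as a machine-readable record. The field names and values here are purely illustrative assumptions, not a formal FAIR metadata schema; in practice one would follow a community standard and controlled vocabularies.

```python
import json
from datetime import datetime, timezone

# Hypothetical metadata record captured alongside a single measurement.
# All names and values below are illustrative, not a real schema.
record = {
    "measurement": {"analyte": "glucose", "value": 5.2, "unit": "mmol/L"},
    "instrument": {"model": "SpectroX-200",      # hypothetical instrument
                   "serial": "SN-0042",
                   "firmware": "3.1.4"},
    "reagent": {"name": "assay kit A", "lot": "LOT-2023-117"},
    "software": {"name": "acquire.py", "version": "0.9.1"},
    "captured_at": datetime.now(timezone.utc).isoformat(),
}

# Serialising to JSON (or another open format) keeps the metadata
# readable by downstream workflows that reuse the data.
print(json.dumps(record, indent=2))
```

The design point is that instrument, reagent, and software details are recorded at capture time, in an open format, rather than reconstructed later from lab notes or proprietary instrument files.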