“We see it as our job to not only understand the open science movement, but to drive it.”

Rusty Speidel

Rusty Speidel is Marketing Director at the Center for Open Science (COS). He has had a long career in digital technology and technology marketing, spending his early years designing and building online scholarly peer-review and article-submission systems for a variety of scientific and medical journals. He also designed and developed some of the first digital continuing medical education (CME) products for cardiology and hematology, and has built digital marketing programs in the e-commerce, gaming, green energy, and travel industries.

Can you tell us what your role at the Center for Open Science involves?

I am the Marketing Director. As such, I consider myself the chief communications person and head cheerleader/evangelist for the organization and our products. I am also responsible for all of our marketing infrastructure and for developing programs to increase awareness and adoption of the Open Science Framework (OSF) and open research practices. It’s a really exciting time to be involved, as we see open access, open science, and reproducibility take a very prominent place in the community discourse. My role is to explain the benefits of those practices and the role the OSF and COS can play in making them a reality for most researchers.

What are the three biggest challenges of developing infrastructure for data?

For us and the Open Science Framework, there is one major goal: adoption of open science practices. To achieve that, we want to meet researchers where they are. We want them to adopt open and reproducible research practices, but we realize that if we ask them to move to environments that take them out of their usual work cycle, we probably will not be successful. We are working to provide a platform that understands, simplifies, and streamlines their workflow from initial planning through preprints and publication.

To enable those capabilities, we face three major challenges: connecting and maintaining application programming interfaces (APIs) to most if not all major research software packages; understanding and managing large-scale project hierarchies; and, eventually, global authentication using ORCID and other identity management systems. We must also ensure that those connections are fast, secure, scalable, and modular; manage file versions; provide control over public and private information; and enable levels of communication between collaborators that meet or exceed what they currently have. It’s a lot to keep track of.
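To make the project-hierarchy challenge concrete, here is a minimal sketch of walking a project tree through the OSF’s public JSON API. The endpoint paths and field names follow the v2 API’s JSON:API conventions but should be verified against current documentation, and the example identifier is hypothetical.

```python
# A minimal sketch of walking an OSF project hierarchy through the
# public JSON API (https://api.osf.io/v2/). Endpoint paths and field
# names follow the v2 API's JSON:API conventions but should be checked
# against current documentation; pagination is omitted for brevity.
import requests

API_ROOT = "https://api.osf.io/v2"

def walk_project(node_id: str, depth: int = 0) -> None:
    """Print a project node's title, then recurse into its child components."""
    node = requests.get(f"{API_ROOT}/nodes/{node_id}/", timeout=10).json()
    print("  " * depth + node["data"]["attributes"]["title"])

    children = requests.get(f"{API_ROOT}/nodes/{node_id}/children/", timeout=10).json()
    for child in children["data"]:
        walk_project(child["id"], depth + 1)

walk_project("abc12")  # "abc12" is a hypothetical project identifier
```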

Are there any unexpected or unusual resources that are required?

From a technology or expertise perspective, understanding cloud-based development is essential. It keeps the user experience light and clean, and it lets us develop updates and improvements much more rapidly. We also took a lot of time designing a modular system architecture that can scale almost indefinitely without becoming too cumbersome to be useful.

How do you keep up to date when science (and open science) is changing at such a quick pace?

We were founded by research scientists who understood the need for openness and reproducibility through their own work, and so we see it as our job to not only understand the open science movement, but to drive it. Since we are a non-profit, we can focus on what’s best for science and the transparent sharing of research, not necessarily on what drives our bottom line. For us, the adoption of open practices is our business, so we try to maintain a leadership position that is agnostic to commercial interests and focused on advancing openness.

How do you think open data infrastructure will look in five years’ time?

Aside from the obvious improvements to hardware and networks, I think it all really comes down to reducing complexity and increasing openness. On the research side, we will see increased connectivity between known software and storage platforms such as R, Overleaf, figshare, Box, and Amazon S3. We will also see more comprehensive workflow and collaboration tools, templates, and electronic laboratory notebooks that make it even easier to gather data, analyze it for insights, share it, revise, and then publish the results. In a truly open world, those results would then be picked up by another research team for reproduction and, hopefully, validation.
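To illustrate what this kind of connectivity can look like architecturally, here is a hypothetical sketch of a uniform storage-connector interface, where every external platform implements the same small contract. All class and method names are illustrative inventions, not the OSF’s actual add-on API.

```python
# A hypothetical sketch of a uniform storage-connector interface: each
# external platform implements the same small contract, so new providers
# can be plugged in without changing the core platform. All names here
# are illustrative, not the OSF's actual add-on API.
from abc import ABC, abstractmethod

class StorageProvider(ABC):
    """Contract that every connected storage platform must satisfy."""

    @abstractmethod
    def list_files(self, folder: str) -> list[str]:
        """Return the file paths under a folder."""

    @abstractmethod
    def upload(self, path: str, data: bytes) -> str:
        """Store data and return a version identifier."""

class S3Provider(StorageProvider):
    """Amazon S3 connector; real code would call the S3 API (e.g. boto3)."""

    def __init__(self, bucket: str) -> None:
        self.bucket = bucket

    def list_files(self, folder: str) -> list[str]:
        return []  # placeholder for an S3 list-objects call

    def upload(self, path: str, data: bytes) -> str:
        return "version-1"  # placeholder for an S3 put-object call

# The core platform talks only to the interface, never to a provider directly.
PROVIDERS: dict[str, StorageProvider] = {"s3": S3Provider("my-bucket")}

def list_project_files(provider_name: str, folder: str) -> list[str]:
    return PROVIDERS[provider_name].list_files(folder)
```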

On the publishing side, I think the model will continue to be disrupted, evolving towards more open-source platforms, localized peer review, and self-publishing in response to the expensive, closed publishing model of the current environment. We can already see publishers moving to create “social research” platforms to try to keep a hand in this movement towards openness. With comprehensive project tools such as the OSF in researchers’ hands, their own organizational drive will be all that’s really needed to move research forward. We have already launched our OSF|Preprints service in anticipation of some of these moves, and have built a globally unique identifier (GUID) protocol that ensures each project can be found and cited indefinitely.
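As a sketch of what such a protocol involves, the snippet below mints short identifiers and guards against collisions. The five-character base-36 format and the in-memory registry are assumptions for illustration, not the OSF’s actual implementation.

```python
# An illustrative sketch of minting short, citable, globally unique
# identifiers. The five-character base-36 format and the in-memory
# registry are assumptions for illustration, not the OSF's actual
# implementation (which would use persistent storage).
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits  # base 36
_minted: set[str] = set()  # stand-in for a persistent identifier registry

def mint_guid(length: int = 5) -> str:
    """Generate a short random identifier, retrying on collision."""
    while True:
        guid = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if guid not in _minted:
            _minted.add(guid)
            return guid

print(mint_guid())  # e.g. "k3x9q"
```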
