
Webinar Recap: Data Science 2.0 and Scaling Remote Teams

Posted by Tyler Whitehouse on Jun 30, 2020 3:09:15 PM

This post recaps our first webinar, held on June 23, 2020. It was fun, and we wanted to share access to the video.

The webinar demonstrated how to create portable, reproducible work in Jupyter and RStudio, along with an easy system for transferring work between CPU and GPU resources. It also explained why decentralization, not centralization, is best for collaboration and productivity in data science. The current remote work situation makes this decentralized approach even more critical.
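To give a concrete flavor of CPU/GPU portability, here is a minimal sketch of our own (assuming PyTorch; it is not the exact code from the demo). The idea is that the same notebook cell runs unchanged whether or not a GPU is present:

```python
# Minimal sketch of device-agnostic code (assumes PyTorch is installed).
# The same cell runs unchanged on a CPU-only laptop or a GPU host.
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# Build a small model and some data, then move both to the chosen device.
model = torch.nn.Linear(16, 1).to(device)
x = torch.randn(8, 16, device=device)

# The forward pass is identical regardless of where the tensors live.
y = model(x)
print(y.shape)  # torch.Size([8, 1])
```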

In the webinar, Dean (CTO) and Tyler (CEO):

  • Outlined the technical problems of collaborating on and managing data science work;
  • Related these problems to cost and productivity concerns;
  • Explained "centralized vs. decentralized" and why decentralization is better;
  • Explained how local automation can make decentralization robust and scalable;
  • Demonstrated Gigantum's Client + Hub model for scaling collaboration and productivity.

Decentralization means letting data scientists work across computing resources in a self-service fashion. For us, it also means being container native, not just cloud native. It is that simple.
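To make "container native" concrete, here is a hypothetical sketch (ours, not Gigantum's actual mechanism) of launching the same containerized Jupyter environment from Python on any machine with Docker installed:

```python
# Hypothetical sketch: launch an identical Jupyter environment anywhere
# Docker runs (laptop, workstation, or cloud VM). This is an illustration
# of the container-native idea, not Gigantum's implementation.
import subprocess

# A public image from the Jupyter Docker Stacks.
IMAGE = "jupyter/minimal-notebook"

# Because the environment *is* the container image, this command is the
# same on every host; only the hardware underneath changes.
subprocess.run(
    ["docker", "run", "--rm", "-p", "8888:8888", IMAGE],
    check=True,
)
```

Because the environment travels with the work rather than living on one managed server, any Docker-capable machine becomes a valid place to run it.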

The key to decentralization is providing automation and a UI at the local level, rather than as a monolithic, managed cloud service. We call this "Self Service SaaS", which is a slightly silly phrase but captures what we mean.

Self Service SaaS keeps the good parts of the SaaS experience, i.e., nice UIs and automation around difficult tasks, and eliminates the bad parts, i.e., zero control over deployment and everything that entails.

Check out the video and let us know what you think. We love to talk about this stuff and we want to hear your story and your problems. You can watch by filling out the form below.


Topics: Data Science, Containers, Git, Jupyter, RStudio

Making Reproducibility Reproducible


Reproducibility doesn't have to be magic anymore. (Image by Abstruse Goose, shared under a Creative Commons license.)

TL;DR: We believe the following:

  • The transmission of scientific knowledge is currently broken, mainly because software has become critical to modern research.
  • Calling the re-execution of static results "reproducibility" isn't enough. Reproducibility should be functionally equivalent to collaboration.
  • Academic emphasis on best practices is ineffective; it should give way to a product-based approach that minimizes effort rather than maximizes it.
  • By focusing on the needs of the end user, we can actually improve how scientific knowledge is communicated and shared.

Topics: Science, Reproducibility, Data Science, Containers, Jupyter