From Productivity Vortex, Toxic Trait Tools Lore to Local-First Tools and Digital Preservation
Mar 11, 2024 · 6 min read
Since elementary school, I’ve been fascinated with tech trends, but that doesn’t mean I always splurge on the latest gadgets. When the productivity software boom hit, I got sucked into the vortex of gurus and optimization hacks. It felt like everyone was promising the secret to becoming a superhuman productivity machine.
Footnotes
Maisie Hill, Period Power: Harness Your Hormones and Get Your Cycle Working For You (Green Tree, 2019). This book explores the connection between menstrual cycles and productivity, offering insights and strategies specifically for women.
Hustle culture often promotes overworking and constant striving for success. While it can drive initial motivation, it’s often unsustainable and can lead to burnout.
Rationalism is a philosophy emphasizing reason and logic. In a productivity context, it can involve data-driven approaches and optimizing workflows for maximum efficiency.
The Obsidian philosophy centers on “file over app”: your notes form a digital repository of thoughts, ideas, and knowledge stored as plain files that are completely customizable and owned by the user.
The term “second brain” refers to a digital system for organizing information, acting as an extension of memory and cognitive processes.
Vint Cerf, a pioneer of the Internet, warned about the potential for a “digital dark age” in which data and information are lost to obsolete technology. (Source: “Google’s Vint Cerf warns of ‘digital Dark Age’,” BBC News, 2015)
Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan, “Local-first software: you own your data, in spite of the cloud,” Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2019), October 2019, pp. 154–178. doi:10.1145/3359591.3359737. This paper advocates for software that prioritizes user ownership and control of data, even in cloud environments.