Data Security: Best Practices

In the bustling, often chaotic, realm of research, data isn’t just an output; it’s the very lifeblood of discovery and innovation. It’s the tangible manifestation of countless hours, painstaking effort, and profound intellectual curiosity. Ensuring its integrity and, crucially, its confidentiality, isn’t just a quaint notion or a fleeting best practice—it’s an absolute, non-negotiable necessity. Think of it this way: your research data is like the carefully cultivated soil that nourishes the seeds of groundbreaking ideas. Without a robust, secure foundation, that soil can erode, taking all your hard work with it. Let’s really dig in, shall we, and explore the robust, actionable strategies you can deploy right now for storing and securing your invaluable research data.

The Foundational Pillars: Organizing Your Data Like a Pro

Imagine walking into a library where every single book is just piled haphazardly on the floor, or a laboratory where samples are labelled with scribbles on sticky notes that inevitably peel off. Frustrating, isn’t it? A colossal waste of time. The same principle applies to your digital research data. A well-organized data system doesn’t just save you precious hours of frantic searching; it dramatically slashes the potential for errors, boosts reproducibility, and frankly, makes your life a whole lot easier when collaboration calls. It’s about creating a clear, logical pathway, not a labyrinth.

Crafting Impeccable File Naming Conventions

Consistency, my friend, is your watchword here. We’ve all been there: that moment of cold dread when you’re staring at a folder full of files like ‘Data1.xlsx,’ ‘final_results.csv,’ or worse, ‘experiment_stuff_new_new.doc.’ You’re left guessing, scratching your head, trying to recall which ‘new’ was the actual ‘final’ one. It’s a common pitfall, but one that’s easily avoidable.

Instead, adopt descriptive and utterly consistent naming conventions. This isn’t just about neatness; it’s about creating a self-documenting system that speaks volumes. For instance, swap that enigmatic ‘Data1.xlsx’ for something like ‘2025_StudyName_ParticipantID001_SurveyResults_v03.xlsx’ or ‘20240315_ProjectX_ExperimentB_RawSpectra_Trial01.csv.’ See the difference? Immediately, you grasp the who, what, when, and even the version number. This clarity is an absolute godsend for quick identification, effortless retrieval, and, critically, for maintaining sanity when you’re elbow-deep in a complex analysis or returning to a project months, or even years, later.

When designing your naming structure, consider including key elements:

  • Date (YYYYMMDD or YYYY-MM-DD): Provides a clear chronological anchor.
  • Project/Study Identifier: Crucial if you’re juggling multiple projects.
  • Experiment/Trial/Sample ID: Pinpoints specific units of data.
  • Data Type/Content: Clearly indicates what’s inside (e.g., ‘RawData,’ ‘CleanedData,’ ‘AnalysisScript,’ ‘Figure’).
  • Version Number: Essential for tracking iterations (more on this later, but it’s a good practice to integrate).
  • Initials of Creator (optional, but helpful in multi-person teams): Adds an extra layer of context.

And please, for the love of all that is reproducible, avoid special characters, spaces (use underscores or hyphens instead), and excessively long names that get truncated by operating systems. Short, descriptive, and consistent – that’s the mantra.
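If you script your workflows, you can go one step further and generate (or check) names automatically. Here’s a minimal Python sketch along those lines; the field order, the regular expression, and the function names are illustrative assumptions, so adapt them to whatever convention your team settles on.

```python
import re
from datetime import date

def build_filename(project: str, sample_id: str, content: str, version: int, ext: str) -> str:
    """Assemble a name like 20250601_ProjectX_Sample001_RawSpectra_v02.csv (illustrative)."""
    stamp = date.today().strftime("%Y%m%d")
    return f"{stamp}_{project}_{sample_id}_{content}_v{version:02d}.{ext}"

# A simple pattern for flagging files that drift from the convention.
NAME_PATTERN = re.compile(r"^\d{8}_[A-Za-z0-9]+_[A-Za-z0-9]+_[A-Za-z0-9]+_v\d{2}\.[a-z0-9]+$")

def is_conventional(filename: str) -> bool:
    """Return True only if a filename matches the convention above."""
    return bool(NAME_PATTERN.match(filename))

print(build_filename("ProjectX", "Sample001", "RawSpectra", 2, "csv"))
print(is_conventional("experiment_stuff_new_new.doc"))  # False, and rightly so
```

A checker like this, dropped into a weekly tidy-up script, is a painless way to catch stragglers before they multiply.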

Building a Robust Folder Structure

Once you’ve mastered naming, the next logical step is housing those beautifully named files in an equally intuitive home. Develop a logical hierarchy for your folders, mirroring the structure of a well-indexed library or a meticulously organized lab bench. Don’t just dump everything into a single ‘Research’ folder; that’s a recipe for digital disaster. Start with broad, top-level categories and then progressively narrow down to specifics. Think of it as drilling down through layers of information.

A commonly adopted and highly effective structure might look something like this:

  • Project_Name_YYYY (Root folder for the entire project)
    • 01_Documentation (Protocols, ethics approvals, grant applications, meeting notes, README files)
    • 02_RawData (Untouched, original data. Never, ever modify files in here!)
      • Date_Experiment_A
      • Date_Experiment_B
    • 03_ProcessedData (Cleaned, transformed, or pre-analyzed data, derived from raw data)
      • Phase1_Cleaned_Dataset
      • Phase2_Transformed_Dataset
    • 04_AnalysisScripts (Code, scripts, statistical packages used for analysis)
      • R_Scripts
      • Python_Scripts
    • 05_Outputs (Results, figures, tables, reports generated from analyses)
      • DraftFigures
      • FinalFigures_Paper1
    • 06_Presentations (Any presentations related to the project)
    • 07_Publications (Manuscript drafts, preprints, submitted versions)

This tiered approach makes it incredibly easy for you, and anyone else collaborating with you, to immediately understand where everything lives. It reduces cognitive load and saves considerable time. And a crucial tip: always include a ‘README.txt’ or ‘README.md’ file in your root project folder. This file should be a mini-map, explaining the overall project, the folder structure, naming conventions, and any key abbreviations. It’s like leaving breadcrumbs for your future self, or for that new grad student who just joined the lab!
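If you set up new projects regularly, a small script can stamp out the same skeleton every time. The Python sketch below assumes the illustrative layout above; the folder names and starter README text are placeholders, not a formal standard.

```python
from pathlib import Path

# Illustrative top-level layout mirroring the structure described above.
FOLDERS = [
    "01_Documentation",
    "02_RawData",
    "03_ProcessedData",
    "04_AnalysisScripts/R_Scripts",
    "04_AnalysisScripts/Python_Scripts",
    "05_Outputs/DraftFigures",
    "06_Presentations",
    "07_Publications",
]

def scaffold_project(root: str) -> None:
    """Create the project skeleton and drop a starter README at the root."""
    root_path = Path(root)
    for folder in FOLDERS:
        (root_path / folder).mkdir(parents=True, exist_ok=True)
    readme = root_path / "README.md"
    if not readme.exists():
        readme.write_text(
            "# Project overview\n\n"
            "Describe the study, the folder structure, naming conventions, "
            "and key abbreviations here.\n"
        )

scaffold_project("ProjectX_2025")
```

Running it once per new project keeps the hierarchy identical across studies, which pays off the moment a collaborator (or your future self) opens the folder.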

The Indispensable Role of Data Documentation

While good naming and folder structures are fantastic, they’re only part of the story. True data organization extends into robust documentation. This means creating metadata, data dictionaries, codebooks, and comprehensive digital lab notebooks. Without these, your perfectly named files can become an enigma to anyone but you, and quite often, even to your future self. Have you ever looked at a dataset six months after collecting it and wondered, ‘What did Var_07 even represent again?’ I certainly have, and it’s a frustrating moment that could have been avoided.

  • Metadata: This is ‘data about data.’ It describes the content, context, quality, and condition of your research data. Think about who created it, when, what instruments were used, the units of measurement, and any specific parameters. Embedding metadata directly within files (like in image EXIF data) or having separate metadata files (e.g., XML, JSON) is vital.
  • Data Dictionaries/Codebooks: For quantitative data, this is non-negotiable. A data dictionary lists every variable, its definition, units of measurement, valid ranges, and codes used (e.g., ‘1 = Male, 2 = Female’). For qualitative data, a codebook defines your thematic codes and how they were applied.
  • Digital Lab Notebooks: Move beyond paper. Tools like Evernote, OneNote, or dedicated Electronic Lab Notebook (ELN) software allow you to record experimental procedures, observations, challenges, and decisions in real-time, linked directly to your digital files. This provides crucial context for reproducibility.

Why is this documentation so critical? Because research isn’t a solo sprint; it’s often a collaborative marathon. For others (and for you, years down the line) to truly understand, validate, and build upon your work, they need context. It’s the difference between handing someone a beautifully bound novel and a pile of unnumbered, unlabeled loose pages.
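One lightweight way to keep that context attached to your files is a ‘sidecar’ metadata file written alongside the data. The Python sketch below uses entirely illustrative field names and file paths; the point is simply that the description travels with the dataset it describes.

```python
import json
from pathlib import Path

# Illustrative metadata for a single raw-data export; the fields are assumptions,
# not a formal metadata standard.
metadata = {
    "file": "20240315_ProjectX_ExperimentB_RawSpectra_Trial01.csv",
    "created_by": "Initials of the data collector",
    "created_on": "2024-03-15",
    "instrument": "Instrument model, settings, calibration details",
    "units": {"wavelength": "nm", "intensity": "arbitrary units"},
    "variables": {"Var_07": "Baseline-corrected intensity at detector 7"},
    "notes": "Raw export; no cleaning applied.",
}

# Write the sidecar file next to the data it describes.
Path("20240315_ProjectX_ExperimentB_RawSpectra_Trial01.meta.json").write_text(
    json.dumps(metadata, indent=2)
)
```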

Building the Fortress: Backing Up Your Data

Ever had your computer decide to spontaneously combust (digitally speaking, of course) and lose weeks, months, or even years of invaluable research files? It’s a gut-wrenching experience, a nightmare that haunts many researchers. I’ve known colleagues who’ve faced this dreaded scenario, and the sheer panic, the cold sweat, is palpable. To prevent such catastrophic data loss, you must adhere to the gold standard: the 3-2-1 backup rule. It’s simple, elegant, and incredibly effective.

The 3-2-1 Rule Explained

  • Three Copies: This means you should always maintain the original data and at least two additional copies. Why three? Because redundancy is your best friend. If one copy fails (and drives do fail, believe me), you have two others to fall back on. This isn’t just about copying a file from one folder to another on the same hard drive; that’s not a true backup strategy.
  • Two Different Media: Don’t put all your eggs in one basket. Store your backups on at least two distinct types of storage media. For instance, you could have your working copy on your computer’s internal SSD, one backup on a robust external hard drive, and the third on a cloud storage service. Diversifying media protects against a single point of failure related to a specific technology or device. Imagine a power surge frying all connected USB drives – if your only backups were on those, you’d be in trouble. Common media types include:
    • External Hard Drives/SSDs: Portable, relatively inexpensive, good for local backups.
    • Network Attached Storage (NAS): Personal or departmental server, offering centralized storage and often RAID redundancy for better protection.
    • Cloud Storage: Services like Google Drive, OneDrive, Dropbox, Box, or specialized institutional cloud solutions. Excellent for off-site storage, accessibility, and often include versioning features. Be mindful of institutional policies and data sensitivity when choosing public cloud providers.
    • Tape Drives: For extremely large datasets or long-term archival, though less common for daily researcher use.
  • One Off-Site: At least one of your backup copies must be stored in a physically separate, off-site location. This is your ultimate protection against localized disasters. Think about it: a fire, flood, or even theft at your lab or home office could wipe out all your locally stored data, even if it’s on multiple devices. An off-site backup ensures continuity. This can be achieved by using cloud storage (which inherently provides off-site storage), taking an external drive home with you each day, or utilizing an institution’s geographically dispersed data centers.

Implementing a Smart Backup Schedule and Crucial Testing

Having the right media and copies is only half the battle. When and how often you back up, and critically, whether you test those backups, truly defines your resilience. There’s nothing quite like the false sense of security derived from ‘backing up’ regularly, only to discover, when disaster strikes, that the backups themselves are corrupted or incomplete.

  • Backup Frequency: This should align with the rate at which your data changes. Are you generating new data daily? Then daily backups are a must. Working on a manuscript that updates weekly? A weekly backup might suffice, coupled with more frequent incremental saves to your local drive. Automated backup solutions, whether through operating system utilities or third-party software, are highly recommended. They remove the human element of forgetfulness.
  • Testing Your Backups: This is the step most frequently overlooked, yet it’s absolutely paramount. A backup you can’t restore is, quite frankly, no backup at all. Periodically, pick a random file or folder from your backup, restore it to a different location (never overwrite your original working data during a test!), and verify its integrity. Can you open it? Is it complete? Does it make sense? Imagine trying to run a key analysis for a paper deadline only to find your critical dataset backup is corrupted. It’s happened, and it’s a special kind of agony.

So, remember the 3-2-1 rule, automate where possible, and always test your backups. Your future self will thank you profusely.
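To make the ‘test your backups’ advice concrete, here is a minimal Python sketch that copies a project folder and verifies each file’s SHA-256 checksum after the copy. The paths are placeholders, and in practice dedicated backup software or rsync-style tooling will usually serve you better; the sketch just shows the verification idea.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Compute a SHA-256 checksum so copies can be compared with their originals."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_and_verify(source_dir: str, backup_dir: str) -> None:
    """Copy every file, then confirm each backup's checksum matches its original."""
    src, dst = Path(source_dir), Path(backup_dir)
    for item in src.rglob("*"):
        if item.is_file():
            target = dst / item.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
            if sha256(item) != sha256(target):
                raise RuntimeError(f"Backup verification failed for {item}")

backup_and_verify("ProjectX_2025", "/mnt/backup_drive/ProjectX_2025")  # placeholder paths
```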

Navigating the Tides: Mastering Version Control

In the dynamic world of research, especially in collaborative projects, data isn’t static. It evolves. Files get updated, analyses get refined, code gets tweaked. Without proper tracking, multiple file versions can quickly lead to utter chaos and confusion. Whose version is the ‘right’ one? Did someone accidentally overwrite the most recent changes? These are the kinds of questions that can derail a project faster than you can say ‘reproducibility crisis.’ Implementing robust version control helps you navigate these choppy waters with confidence.

Beyond Simple Manual Tracking

Yes, you can manually track versions by adding ‘v1,’ ‘v2,’ ‘final,’ ‘final_final,’ or dates to file names. We’ve all done it. But let’s be honest, it’s cumbersome, highly prone to human error, and completely breaks down in multi-person collaborations. ‘Final_final_use_this_one_really.docx’ is not a sustainable version control strategy for serious research.

Embracing Automated Version Control Tools

This is where automated tools become your superpower. Software like Git, originally designed for software development, has become an indispensable asset for researchers, particularly those working with code, scripts, or even text-based data. Git provides a distributed version control system that allows you to:

  • Track Every Change: Git meticulously records every modification, by whom, and when. It’s like a comprehensive ledger for your project’s evolution.
  • Revert to Previous Versions: Made a mistake? Introduced a bug? Git allows you to seamlessly roll back to any previous state of your project, restoring stability and preventing data loss.
  • Facilitate Collaboration: Multiple researchers can work on the same files simultaneously without fear of overwriting each other’s work. Git handles ‘merging’ changes and highlights conflicts that need manual resolution.
  • Branching and Merging: Experiment with new ideas or analyses in separate ‘branches’ without affecting the main project. Once validated, you can ‘merge’ these changes back in.
  • Commit Messages: Each set of changes is accompanied by a ‘commit message,’ forcing you to document what you did and why, adding another layer of invaluable context.

While Git is powerful, it has a learning curve. For very large datasets or non-text files (like large image files or compiled binaries), Git’s core design might not be optimal. In such cases, consider Git Large File Storage (LFS) or leverage the built-in versioning features offered by many cloud storage providers (e.g., Google Drive’s version history, Box’s file versioning). These tools are generally simpler to use for non-code files and can still provide crucial historical snapshots.

When is version control essential? Anytime you’re working on something that evolves, especially if multiple people are involved. Think scripts, analysis pipelines, manuscript drafts, or even evolving datasets. When a grant proposal required re-running an old analysis for a revised submission, I was incredibly grateful for the Git repository that let me quickly pinpoint the exact code version used for the original results. It saved me days of frantic detective work!
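If your analyses already run from Python, you can even fold a Git snapshot into the end of a pipeline so the code and its outputs are committed together. The sketch below is illustrative only: it assumes Git is installed, the project folder is already a repository, and there is something new to commit.

```python
import subprocess

def snapshot(message: str) -> None:
    """Stage all changes and record a commit with a descriptive message."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)

# Example: record the state of the analysis after a pipeline run.
snapshot("Re-run Experiment B analysis with updated baseline correction")
```

For day-to-day work, plain ‘git add’ and ‘git commit’ at the command line (or a graphical Git client) do exactly the same job.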

Fortifying the Perimeter: Securing Your Data

Protecting your research data from unauthorized access, accidental loss, or malicious threats is paramount. A data breach isn’t just an inconvenience; it can devastate a project’s credibility, expose sensitive participant information, and even lead to legal repercussions. Think of your data as a vault: you need multiple layers of robust protection. So, let’s explore these critical security measures.

The Power of Encryption

Encryption is your digital padlock, transforming your data into an unreadable format without the correct key. It’s a fundamental layer of defense. You need to consider encryption both ‘at rest’ (when data is stored on a device) and ‘in transit’ (when data is moving across networks).

  • Encryption at Rest:
    • Full Disk Encryption: Tools like BitLocker (Windows), FileVault (macOS), or VeraCrypt (cross-platform and open-source) encrypt your entire hard drive. If your laptop gets stolen, the data on it is unreadable without the password.
    • File/Folder Encryption: For specific sensitive files or folders, you can use encryption tools within operating systems or third-party software. This allows you to protect individual pieces of data without encrypting the entire drive.
    • Encrypted Cloud Storage: Many cloud providers offer encryption of data at rest on their servers. However, for highly sensitive data, consider client-side encryption, where you encrypt the data before uploading it to the cloud. This ensures that even the cloud provider cannot access the unencrypted data.
  • Encryption in Transit:
    • Whenever you transmit data over a network (e.g., uploading to a cloud service, sharing with a colleague), ensure the connection is secure. Look for ‘https://’ in web addresses, indicating Transport Layer Security (TLS, the successor to SSL) encryption. This scrambles the data as it travels, protecting it from eavesdropping. Avoid sending sensitive data over unencrypted email or public Wi-Fi without a Virtual Private Network (VPN).

Remember, the strongest encryption is only as good as the password or key protecting it. Use long, complex, unique passphrases, and consider a reputable password manager.
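For the client-side encryption scenario mentioned above, here is a minimal sketch using the open-source Python ‘cryptography’ package. The file names are placeholders, and in a real workflow you would manage the key through a password manager or an institutional key vault rather than generating and keeping it next to the script.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it securely, never alongside the encrypted files.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a sensitive file before it ever leaves your machine for the cloud.
with open("participant_responses.csv", "rb") as plaintext_file:
    ciphertext = fernet.encrypt(plaintext_file.read())
with open("participant_responses.csv.enc", "wb") as encrypted_file:
    encrypted_file.write(ciphertext)

# Later, with the same key, the data can be recovered:
# original_bytes = fernet.decrypt(ciphertext)
```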

Granular Access Control

Not everyone needs access to everything, all the time. Implementing the ‘principle of least privilege’ is crucial: grant individuals only the minimum necessary access to perform their tasks. This drastically limits the potential damage if an account is compromised or if someone makes an accidental error.

  • Role-Based Access Control (RBAC): Assign permissions based on roles within your research team (e.g., Principal Investigator, Research Assistant, Data Analyst). A PI might have full access to raw and processed data, while a research assistant might only have access to specific subsets of cleaned data for analysis.
  • Strong User Authentication: Go beyond simple passwords. Implement multi-factor authentication (MFA) whenever possible. Requiring a second verification step (like a code from your phone or a biometric scan) makes it exponentially harder for unauthorized users to gain access, even if they somehow guess your password.
  • Regular Review and Audits: Access permissions aren’t static. People leave projects, roles change. Regularly review who has access to what, and remove permissions that are no longer needed. Check access logs for any suspicious activity or unauthorized attempts.
  • Segregation of Duties: For extremely sensitive or critical data, consider having different people responsible for different aspects of data management (e.g., one person handles raw data archiving, another handles data cleaning, another handles analysis). This adds another layer of control and accountability.

I once worked on a project where a junior researcher accidentally deleted a critical dataset from a shared drive. Thankfully, we had robust access controls in place, limiting write permissions, and a solid backup strategy. But it highlighted how easily human error can occur, and why limiting broad access is so vital.
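Conceptually, role-based access control comes down to an explicit map from roles to permitted actions, with anything not granted denied by default. The toy Python sketch below illustrates the idea; the roles, resources, and actions are hypothetical, and in practice this is enforced through your institution’s identity management and file-sharing platforms rather than application code.

```python
# Hypothetical role-to-permission map illustrating the principle of least privilege.
PERMISSIONS = {
    "principal_investigator": {"raw_data": {"read", "write"}, "processed_data": {"read", "write"}},
    "data_analyst": {"processed_data": {"read"}},
    "research_assistant": {"processed_data": {"read"}},
}

def can_access(role: str, resource: str, action: str) -> bool:
    """Return True only if the role has been explicitly granted the action."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())

print(can_access("data_analyst", "raw_data", "write"))            # False: denied by default
print(can_access("principal_investigator", "raw_data", "write"))  # True: explicitly granted
```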

Fortifying Physical Security

In our increasingly digital world, it’s easy to forget about the physical security of your devices. Yet, a stolen laptop or an unsecured server can be just as devastating as a cyberattack.

  • Secure Locations: Store physical devices containing research data (servers, external hard drives, laptops) in secure, controlled environments. This means locked offices, secure server rooms, and restricted access areas. Consider environmental controls like temperature and humidity monitoring for servers, and uninterruptible power supplies (UPS) to protect against power outages.
  • Device Security: Always use strong passwords on your devices. Lock your screen when you step away. Use physical security measures like laptop locks. When disposing of old hardware, ensure all data is securely wiped (not just deleted) using data sanitization tools or physical destruction.
  • Visitor Protocols: If you have visitors in your workspace, ensure sensitive data or devices are not easily accessible or visible. A simple protocol of escorting visitors can make a big difference.

Bolstering Network Security

Your data doesn’t live in isolation; it traverses networks. Protecting these pathways is crucial.

  • Firewalls: Configure both hardware and software firewalls to control incoming and outgoing network traffic, blocking unauthorized access attempts.
  • Intrusion Detection/Prevention Systems (IDS/IPS): These systems monitor network traffic for suspicious activity or known attack patterns, alerting you or automatically blocking threats.
  • Virtual Private Networks (VPNs): When working remotely or accessing sensitive data from an unsecured network (like public Wi-Fi), always use a VPN. A VPN creates a secure, encrypted tunnel between your device and your institution’s network, protecting your data from interception.
  • Secure Wi-Fi: Ensure your home or office Wi-Fi network is password-protected with WPA2 or WPA3 encryption. Avoid using open or public Wi-Fi networks for sensitive work.
  • Patch Management: Keep your operating systems, software, and applications updated. Software updates often include critical security patches that fix vulnerabilities exploited by attackers. Ignoring these updates leaves gaping holes in your defenses.

The Ethical Imperative: Data Anonymization and Pseudonymization

For research involving human participants or sensitive information, simply securing data isn’t enough; you must protect individual identities. This is where anonymization and pseudonymization come into play.

  • Anonymization: This is the process of irreversibly removing or transforming personally identifiable information (PII) so that the individual cannot be identified, directly or indirectly. True anonymization is incredibly difficult to achieve and often involves significant data transformation (e.g., generalization, suppression, aggregation), which can sometimes reduce data utility. For example, replacing specific birth dates with age ranges.
  • Pseudonymization: This involves replacing direct identifiers with a pseudonym or artificial identifier. While the direct identifiers are removed, it’s still possible to re-identify the individual if the link between the pseudonym and the original identifier is held separately and securely. This offers a balance between privacy protection and data utility. For instance, assigning a unique random ID to each participant, with the key linking that ID back to their name stored in a highly secure, separate location.

Both techniques minimize risk in case of a data breach and are often required by ethical review boards and data protection regulations like GDPR or HIPAA. This isn’t just a technical step; it’s a profound ethical responsibility.
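As a concrete illustration of pseudonymization, the Python sketch below replaces a hypothetical ‘name’ column in a CSV file with random participant IDs and writes the re-identification key to a separate file, which you would store under much tighter controls (or destroy entirely if full anonymization is the goal). Column names and file paths are illustrative only.

```python
import csv
import secrets

def pseudonymize(in_path: str, out_path: str, key_path: str) -> None:
    """Swap the 'name' column for random IDs; keep the re-identification key separate."""
    mapping = {}
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        out_fields = ["participant_id"] + [f for f in reader.fieldnames if f != "name"]
        writer = csv.DictWriter(dst, fieldnames=out_fields)
        writer.writeheader()
        for row in reader:
            name = row.pop("name")
            pid = mapping.setdefault(name, f"P{secrets.token_hex(4)}")
            writer.writerow({"participant_id": pid, **row})
    # The key linking pseudonyms back to names lives in its own, tightly guarded file.
    with open(key_path, "w", newline="") as keyfile:
        csv.writer(keyfile).writerows(mapping.items())

pseudonymize("survey_raw.csv", "survey_pseudonymized.csv", "reid_key_store_separately.csv")
```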

Preparing for the Worst: An Incident Response Plan

Even with the most robust security measures, breaches can occur. No system is 100% impenetrable. Therefore, having an incident response plan is not optional; it’s critical. What happens if a server is compromised, or a laptop is stolen, or sensitive data is accidentally leaked? Knowing what to do before it happens can mitigate damage and ensure compliance with reporting obligations.

Your plan should outline steps for:

  1. Identification: How do you detect a breach?
  2. Containment: How do you stop the breach from spreading?
  3. Eradication: How do you remove the threat and restore systems?
  4. Recovery: How do you get back to normal operations and ensure data integrity?
  5. Post-Incident Review: What lessons can be learned to prevent future incidents?

This isn’t just about technical fixes; it’s about communication, legal reporting (e.g., notifying affected individuals or regulatory bodies), and reputational management.

A Real-World Example: University of Washington’s Comprehensive Data Security

The University of Washington Libraries serve as a fantastic case study, demonstrating how a large institution approaches data security with a multi-pronged strategy. They understand that research data isn’t just digital files; it represents trust and scientific credibility.

Their approach isn’t just about ticking boxes; it’s deeply integrated into their research support services:

  • Secure Storage Emphasis: They guide researchers toward secure storage solutions, stressing the dual importance of strong passwords and robust encryption. This isn’t just for servers; it extends to individual devices and the cloud. They likely have institutional agreements with cloud providers that meet stringent security standards, giving researchers safe, vetted options instead of forcing them to navigate the wild west of consumer cloud services.
  • Rigorous Access Control: UW implements role-based access meticulously. This means a complex hierarchy of permissions: a Principal Investigator might have full read/write access to all project data, while a specific data analyst might only have read access to certain datasets needed for their specific tasks. A new student researcher, perhaps, starts with very limited access, which expands only as their role and trustworthiness evolve. They likely use centralized identity management systems to enforce these policies, simplifying the process for researchers and IT staff alike.
  • Proactive Data Anonymization: Recognizing the sensitivity inherent in much academic research, especially that involving human subjects, UW strongly advocates for and provides resources on data anonymization and pseudonymization. This isn’t just a recommendation; it’s often a prerequisite for storing certain types of data, aligning with ethical guidelines and legal mandates like HIPAA or GDPR. They would train researchers on techniques like k-anonymity or differential privacy, ensuring participant identities are fiercely protected, thereby minimizing risk in the unfortunate event of a data breach. Their focus is clearly on preventing harm from the outset, not just reacting to it.

What’s particularly valuable about the University of Washington’s approach is their holistic view. They don’t just provide tools; they educate researchers, set clear policies, and build an ecosystem where data security is woven into the very fabric of research practice. It’s an institutional commitment to safeguarding the scholarly enterprise.

The Legal and Ethical Imperative: Regulatory Compliance

Beyond just good practice, a significant driver for robust data storage and security is the ever-growing landscape of legal and ethical regulations. Ignore these at your peril; the consequences can be severe, ranging from hefty fines to reputational damage, and even the inability to publish your findings.

  • General Data Protection Regulation (GDPR): If your research involves data from EU citizens, or if your institution operates within the EU, GDPR is non-negotiable. It mandates strict rules for collecting, storing, and processing personal data, including requirements for data minimization, consent, data breach notification, and the ‘right to be forgotten.’
  • Health Insurance Portability and Accountability Act (HIPAA): For health-related research in the U.S., HIPAA sets the standard for protecting Protected Health Information (PHI). This includes stringent requirements for administrative, physical, and technical safeguards.
  • Institutional Review Boards (IRBs) / Ethics Committees: These committees, fundamental to ethical research, will scrutinize your data management plan, particularly how you handle consent, data privacy, and security. Adherence to their directives is essential for research approval.
  • Funder Requirements: Many granting agencies (e.g., NIH, NSF) now mandate Data Management Plans (DMPs) that detail how data will be stored, secured, and shared. Non-compliance can jeopardize future funding.
  • Data Sharing Agreements (DSAs) and Data Use Agreements (DUAs): When sharing data with external collaborators or obtaining data from other sources, these legal agreements define the terms of data access, use, and security. They are critical for ensuring everyone understands their responsibilities.

Understanding and adhering to these regulations isn’t just about avoiding penalties; it’s about upholding the trust placed in researchers by participants, funders, and the broader scientific community. It’s an ethical compass guiding your data journey.

Concluding Thoughts: Your Data, Your Responsibility

Look, nobody wants to be the person who lost a decade’s worth of research because of a forgotten backup or an easily guessed password. Implementing these best practices—meticulously organizing, diligently backing up, smartly versioning, and rigorously securing your data—isn’t just about protecting files; it’s about safeguarding your intellectual investment, preserving scientific integrity, and ensuring your work can stand the test of time. It’s about building a foundation for truly impactful, trustworthy research outcomes that you can confidently share with the world.

It might seem like a lot to take in, perhaps even a bit overwhelming at first. But remember, you don’t have to overhaul everything overnight. Start small. Pick one area, like implementing consistent file naming, and master it. Then move to the next. Even small improvements accumulate into a formidable defense. Your research journey is a marathon, not a sprint, and your data is the most valuable asset in that race. Treat it with the respect and diligent care it deserves. After all, the next breakthrough might just be lurking in that perfectly organized, securely backed-up folder.

References

  • University of Washington Libraries. (n.d.). ‘Implementing: Storing and Securing Your Data.’ Retrieved from guides.lib.uw.edu/research/dmg/storage-security
  • University of Northern Colorado. (n.d.). ‘Best Practices for Data Preservation.’ Retrieved from libguides.unco.edu/datamgt/bestpractices
  • UC San Diego Library. (n.d.). ‘Best Practices for Data Management.’ Retrieved from library.ucsd.edu/research-and-collections/research-data/plan-and-manage/data-management-best-practices.html
  • University of Maine. (n.d.). ‘Data Security Best Practices.’ Retrieved from umaine.edu/research-compliance/research-security/data-security-best-practices/
  • California State University Sacramento. (n.d.). ‘Data Security and Privacy in Research Data Management (RDM).’ Retrieved from csus.libguides.com/RDM/security

5 Comments

  1. The point about data anonymization and pseudonymization is critical, particularly with increasing privacy regulations. What strategies have you found most effective in balancing data utility with strong privacy protection when dealing with sensitive research data?

    • That’s a great question! I’ve found differential privacy techniques offer a solid balance, adding statistical noise to the data while preserving its overall utility for analysis. Also, employing secure multi-party computation allows analysis without directly exposing the raw, sensitive data. It’s an evolving field, but these approaches show great promise! What are your thoughts?

  2. So, are we suggesting “experiment_stuff_new_new.doc” is *not* the gold standard in file naming? Asking for a friend currently staring at a folder full of “draft3_comments_FINAL.docx,” “draft3_comments_FINAL_revised.docx,” and a mysterious “Document1.docx.” Is there hope for them (me)?

    • Haha, I feel your friend’s pain! While “experiment_stuff_new_new.doc” might be *a* standard, it’s definitely not *the* gold standard. There’s absolutely hope! A little structure goes a long way. Maybe try adding dates or version numbers to your files to establish a better system.

  3. The point about integrating data security into the research support services is key. Training researchers on secure practices, alongside providing vetted storage options, creates a culture of responsibility and significantly reduces vulnerabilities.
