Single-Cell Spatial Proteomics by Molecular Pixelation
https://www.genengnews.com/resources/tutorials/single-cell-spatial-proteomics-by-molecular-pixelation/ Fri, 01 Sep 2023
In this tutorial Filip Karlsson, co-founder and CTO of Pixelgen Technologies, describes a DNA-based visualization technology for mapping cell surface proteins and their spatial interrelationships.

By Filip Karlsson

The spatial distribution of cell surface proteins, which governs vital processes of the immune system such as inter-cell communication and mobility, has proven difficult to assess. New tools are needed that not only capture spatial organization of immune cells, but also multiplex at a high level while delivering high resolution and throughput.

Flow cytometry using fluorophore-labeled antibodies has been extensively used to study proteins on immune cells for several decades. More recently, efforts have been made to overcome the multiplexing limitations of conventional flow cytometry by instead labeling antibodies with isotopes for mass spectrometry readout, or with oligonucleotides for next-generation sequencing readout.

Although these approaches can be used to characterize and phenotype cells at high multiplex and throughput, the information they provide pertains only to the abundance of each target protein on each cell. They do not describe the spatial organization of the targeted molecules.

Fluorescence microscopy has traditionally been used to study the spatial organization of proteins on single cells, but multiplexing is limited to a few targets due to the spectral properties of fluorophores, and the signal-to-noise ratio suffers from autofluorescence and spectral bleed-through between channels. Furthermore, the view provided by each microscopy image is limited to a selected focal plane, so if the whole cell surface is to be represented, a Z-stack of images for each fluorophore is required, limiting throughput.

Recently, methods solely relying on oligonucleotide sequences to image biological samples have been demonstrated. Sometimes referred to as “DNA microscopy,” these methods rely on the incorporation of DNA tags that can be decoded to reveal both biomolecule identity and position within the biological sample. These methods offer possibilities to circumvent the limitations in multiplexing, throughput, and (potentially) resolution that beset optical imaging–based methods.

Pixelgen Technologies has developed Molecular Pixelation (MPX) technology to unlock a new spatial dimension to single-cell proteomics research by supplementing abundance information with spatial information about target proteins. This added spatial dimension provides researchers with opportunities to gain deeper insights into cell function at sub-cellular resolution.

The MPX protocol can be performed using standard molecular biology laboratory equipment, without the need for any dedicated hardware or consumables to compartmentalize cells, and a dedicated data processing pipeline is available for analysis of the sequencing output. The reagent kit contains an 80-plex panel against cell surface receptor targets on the major types of peripheral blood mononuclear cells (PBMCs)—T cells, B cells, natural killer cells, and monocytes—and allows for sequencing of up to 1,000 cells per sample and a total of eight samples per reagent kit.

MPX workflow overview

The MPX workflow can be divided into six steps: a cell preparation step, two pixelation steps, an NGS preparation step, an NGS step, and an analysis step (Figure 1). During the cell preparation step, the immune cells in suspension are chemically fixed with paraformaldehyde to lock the surface proteins in place and prevent any reorganization during downstream sample processing. The fixed cells are blocked, and a target panel of 80 antibody-oligonucleotide conjugates (AOCs) is added, whereupon the AOCs bind their surface receptor targets. Next, the pixelation steps consist of serially hybridizing a set of so-called DNA pixels to the oligonucleotide portion of AOCs bound to cells. DNA pixels are single-stranded DNA molecules produced by rolling circle amplification, where each unique DNA pixel molecule contains repeats of a unique sequence identifier. Each DNA pixel molecule can hybridize to multiple AOCs in proximity on the cell surface.

The DNA pixel identifier sequence is then incorporated onto the hybridized AOC via a gap-fill ligation enzymatic reaction, forming about 1,000 neighborhoods on the cell surface where all AOC molecules within each neighborhood now share the same DNA pixel identifier sequence. The hybridization and gap-fill ligation reactions are then repeated for a total of two pixelation steps, thereby creating two sets of partially overlapping neighborhoods across the cell surface of each assayed cell.

Each generated amplicon contains a protein identifier barcode, a unique molecular identifier sequence, two DNA pixel identifier sequences, and PCR primer sites. The generated amplicons are finally amplified by PCR, purified, and quantified for Illumina sequencing.

Data processing and spatial inference

In short, the dedicated data processing pipeline, which is called Pixelator, receives the sequencing reads and subjects them to quality filtering, decoding (to establish protein identities), error correction, and consolidation (to collapse identical reads into unique sequences). Each sequenced unique molecule can be represented as an edge (link) of a graph (network) with the DNA pixel identifier sequences as nodes and the protein identity tags as edge or node attributes. Separated “cell graphs” representing individual cells are contained within the sample-level graph generated from a sequenced sample.

Spatial inference of the relative locations of individual AOC molecules is possible by interrogating the relative positions of the AOCs within each cell graph. This also allows for the calculation of spatial metrics such as the degree of clustering (polarity) of each of the 80 protein targets, or the level of colocalization between pairs of protein targets.
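As a rough illustration of this graph-based representation, the Python sketch below builds a toy cell graph and computes a simple clustering statistic for one target. The pixel names, counts, and the coefficient-of-variation "polarity proxy" are illustrative stand-ins only; this is not the Pixelator pipeline or Pixelgen's published polarity score.

```python
# Illustrative sketch (not Pixelgen's pipeline) of representing MPX-style data
# as a graph and querying it for a spatial metric. All names and values are hypothetical.
import networkx as nx
import numpy as np

# Each edge links two DNA-pixel identifiers joined on the same unique AOC molecule;
# the edge carries the protein identity of that AOC.
edges = [
    ("pixelA_01", "pixelB_07", {"protein": "CD20"}),
    ("pixelA_01", "pixelB_09", {"protein": "CD20"}),
    ("pixelA_02", "pixelB_07", {"protein": "CD3"}),
    ("pixelA_03", "pixelB_09", {"protein": "CD20"}),
]
cell_graph = nx.Graph()
cell_graph.add_edges_from(edges)

# Per-node protein counts: how many AOCs of each target landed in the
# neighborhood defined by that DNA pixel.
counts = {n: {} for n in cell_graph.nodes}
for u, v, attrs in cell_graph.edges(data=True):
    for node in (u, v):
        counts[node][attrs["protein"]] = counts[node].get(attrs["protein"], 0) + 1

# Crude clustering proxy: coefficient of variation of a target's counts across
# neighborhoods. A uniformly distributed protein scores low; a clustered one scores high.
def polarity_proxy(target):
    values = np.array([c.get(target, 0) for c in counts.values()], dtype=float)
    return values.std() / values.mean() if values.mean() > 0 else 0.0

print("CD20 polarity proxy:", round(polarity_proxy("CD20"), 2))
```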

Results

Data analysis of protein abundance can be performed on MPX data similarly to other multiplexed single-cell methods. For example, PBMCs taken from a healthy donor were processed through the MPX protocol, and then a uniform manifold approximation and projection (UMAP) dimensionality reduction was performed on the protein count matrix output, which formed separated clusters that were consistent with the expected protein signatures for the major cell types expected in the PBMC samples (Figure 2). The fraction of each cell type was also consistent with expected fractions seen in healthy PBMC donors.

Figure 2. UMAP visualization of MPX count data from a PBMC sample. The observed clusters contain count signatures consistent with expected cell subpopulations within a PBMC sample. The pie chart indicates the fraction of all cells for each cluster.
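The sketch below shows, with simulated data, what this abundance-level step looks like in code: a cells-by-proteins count matrix is log-transformed and embedded with UMAP. The matrix dimensions, transform, and parameters are hypothetical placeholders; real input would come from the MPX processing output.

```python
# Minimal sketch of UMAP on a cells x proteins count matrix (simulated data).
import numpy as np
import umap  # from the umap-learn package

rng = np.random.default_rng(0)
counts = rng.poisson(lam=5, size=(1000, 80))   # 1,000 cells x 80 proteins (simulated)
log_counts = np.log1p(counts)                  # simple variance-stabilizing transform

embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=0).fit_transform(log_counts)
print(embedding.shape)                         # (1000, 2): 2D coordinates for plotting clusters
```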

To demonstrate the added spatial dimension of the data, Raji B cells were treated with an AOC version of rituximab, a therapeutic antibody that targets CD20, before the treated cells and untreated control cells were fixed and processed through the protocol. Rituximab is known to cluster CD20 on B cells, and this clustering should be reflected in the polarity score reported for rituximab in the output data.

The clustering of CD20 upon rituximab AOC treatment was confirmed with fluorescence microscopy (Figure 3). Polarity scores for rituximab, which reflect the degree of clustered protein expression, were compared between stimulated and control samples, and they showed a significant elevation for rituximab-treated cells. Additionally, graph representations of individual rituximab-treated cells, colored by the rituximab count density at each node, showed a clustered expression pattern consistent with the microscopy validation.

Figure 3. Polarity scores of rituximab-treated and untreated Raji cells (left). Polarity scores were significantly elevated for rituximab-treated cells, suggesting clustered protein expression. Fluorescence microscopy validation confirmed the presence of clustered protein expression for rituximab-treated cells (middle). A heatmap of rituximab count density from a representative cell graph of a stimulated sample shows a clustered expression pattern (right).

Conclusion

Unlocking a new spatial dimension to single-cell proteomics research at high multiplex and throughput can enable researchers to gain additional and deeper insights into immune cell function at scale. Example data from Pixelgen Technologies’ MPX technology showcases the ability to detect differential spatial clustering of a target protein confirmed to be clustered upon stimulation with rituximab.

Filip Karlsson is co-founder and chief technology officer of Pixelgen Technologies. Website: www.pixelgen.com.

Label-Free Cell Analysis with Laser Force Cytology
https://www.genengnews.com/resources/tutorials/label-free-cell-analysis-with-laser-force-cytology/ Sun, 13 Aug 2023
Real-time process analytical technology for accelerated biologics development and improved manufacturing consistency.

By Colin Hebert, PhD, Mina Elahy, PhD, Sean Hart, PhD, Jonathan Turner, PhD, and Renee Hart

The overall process for developing and manufacturing vaccines and cell and gene therapies (CGTs) is challenging and resource intensive because it involves complex and variable raw materials, demanding bioprocessing procedures, and sensitive final products. The adoption of robust analytical technologies to enable rapid process development and ensure manufacturing quality and consistency is a key component of failure-proofing biologics license applications.

However, many current analytical methods, especially for vaccines and CGTs, face challenges in terms of speed, reproducibility, and resource requirements, driving up costs and development times. Advanced bioanalytics are becoming a vital part of a successful quality by design (QbD) biomanufacturing program, where accurate and precise real-time data enable improved production consistency and product quality.

Implementing process analytical technology (PAT)—where real-time data, including critical quality attributes (CQAs) and critical process parameters (CPPs), can be comprehensively and proactively monitored and analyzed—allows for advanced process controls and the development of dynamic and robust processes.

A real-time label-free PAT

Laser Force Cytology™ (LFC™), a novel label-free technology, applies optical and hydrodynamic forces to single cells to measure their intrinsic biophysical and biochemical properties without the use of dyes, antibodies, or fluorescent labels.1 These optical force properties, including refractive index, change with a wide variety of biological phenomena, including cell health conditions, activation, transfection, cell differentiation, and viral infection.

LFC measures subtle early indicators of phenotypic changes and differences in a sensitive and rapid manner, enabling both in-process analytics as well as offline release and potency assays to ensure consistent product quality and yields.2-4 For example, LFC can provide a coefficient of variation as low as 14% when measuring adeno-associated virus (AAV) transduction.

In this tutorial, select LFC applications illustrate the benefit of real-time optical force data to monitor stem cell differentiation, AAV production via transfection, and live virus vaccine potency. In contrast to many assays that are laborious, slow, and unreliable and thus not suitable as PAT methods, LFC provides accurate, precise, and sensitive results in minutes, demonstrating its key role within QbD programs.

Label-free stem cell differentiation monitoring

Antibodies have wide applicability as analytical tools, including phenotypic cell characterization, protein detection/quantification, and protein separation. However, antibodies are not without their drawbacks, and in many cases, a label-free approach is advantageous. Upon binding to a cell, antibodies can alter its activation state. Consequently, the expression of surface markers is not always consistent, and the analytical results may be affected.

A label-free approach instead allows the cell to be measured in its native state. Antibody sensitivity and specificity can vary based on the target antigen, population diversity, and manufacturing lot, creating false positives and false negatives as well as difficulties in repeating the results of antibody-based studies.5 Antibodies also require prior knowledge and the availability of a specific cell surface marker, preventing a priori discovery of unknown changes or differences among cells. In contrast, a label-free approach can make unbiased and universal measurements that are unaffected by lot-to-lot variability.

Finally, antibodies typically require significant time, cost, and labor to implement, making them resource intensive and poorly suited for use in PAT methods. LFC provides label-free analysis with minimal sample perturbation and rapid time to result (minutes). One example is monitoring the differentiation of stem cells. LFC data illustrating the differentiation of human bone marrow–derived mesenchymal stem cells (hBM-MSCs) into either osteoblasts or adipocytes are shown in Figure 1.

Figure 1. Principal Component Analysis of Cell Populations. Cell samples, either undifferentiated or directed toward osteogenic or adipogenic lineages, were analyzed using LFC at the indicated time points. Performing principal component analysis with the population average and standard deviation of each of the LFC metrics allowed the changes between the time points and lineages to be visualized.

hBM-MSCs were measured using LFC prior to differentiation and then compared to samples harvested at 7, 14, and 21 days post differentiation for both of the pathways using principal component analysis (PCA).

PCA was used to reduce population-wide data from multiple LFC parameters into principal components 1 and 2. In Figure 1, changes are shown for both lineages when compared to undifferentiated cells, with the adipogenic samples showing similar results at days 14 and 21, indicating that differentiation has likely stopped, while the osteogenic samples continue to progress through day 21.
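The following minimal sketch shows how population-average LFC metrics from several samples might be standardized and projected onto two principal components. The metric names and values are placeholders, not actual LFC output.

```python
# Illustrative PCA sketch: rows = samples (lineage x time point), columns = LFC population metrics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

samples = ["undiff_d0", "osteo_d7", "osteo_d14", "osteo_d21", "adipo_d7", "adipo_d14", "adipo_d21"]
metrics = np.array([
    [2400, 310, 0.42],
    [2250, 290, 0.48],
    [2100, 275, 0.55],
    [1950, 260, 0.61],
    [2300, 300, 0.46],
    [2150, 280, 0.52],
    [2140, 281, 0.53],
])  # hypothetical population averages (e.g., velocity, size, eccentricity)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(metrics))
for name, (pc1, pc2) in zip(samples, scores):
    print(f"{name}: PC1={pc1:+.2f}, PC2={pc2:+.2f}")  # coordinates for a plot like Figure 1
```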

This demonstrates the capability for LFC to monitor differentiation in a label-free manner, providing rapid and sensitive results to inform process development and manufacturing. The ability to quickly obtain nonsubjective results that track differentiation enables real-time process control beyond simple viability and proliferation, without the burden and bias of antibody-based labels.

AAV transfection reagent optimization

The production of viral vectors such as lentivirus and AAV is typically an integral part of the development and manufacturing of advanced therapies such as chimeric antigen receptor T-cell therapies and gene therapies. However, the use of viral vectors faces several challenges related to their development and manufacturing, from characterization, to quantification, to downstream purification.6,7

Manufacturing, in particular, is challenging when it comes to consistently maintaining high purity, potency, and safety while also focusing on cost controls that are acceptable for large-scale manufacturing.8

One of the most common methods of production for both lentivirus and AAV vectors is the use of transient transfection in human (HEK293) cells.9 Current tools to monitor and quantify CQAs such as viral titer during the transfection process are labor intensive and tedious, reducing the speed and efficiency of process development and the ability to monitor in real time.

Shown in Figure 2 are results from a collaboration between Catalent Biologics and LumaCyte to compare AAV vector production using three different transfection reagents, using both LFC and a droplet digital PCR (ddPCR)-based viral genome assay.10 Transfection complexes were prepared with DNA and with each of the reagents, and then they were added to HEK293 cells.

Figure 2. Velocity Histograms Comparing Control HEK293 Cells to Cell Populations Transfected with AAV Production Plasmids Using Three Different Transfection Reagents. Transfection resulted in a clear difference between each population and the control as well as differences between each of the reagents. The percentage of cells in the population with a velocity below 2,400 µm/s is shown numerically and graphically for the control and each reagent. Velocity is proportional to optical force.

At 72 h post transfection, cells were harvested and analyzed using LFC and compared to untransfected cells growing in parallel. Figure 2 shows single-cell histograms for each of the reagents compared to the control. For all reagents, the transfection resulted in a broadening of the velocity distribution, indicating an increased population heterogeneity. In addition, the percentage of low-velocity cells increased in the transfected samples, and by defining a velocity threshold of 2,400 µm/s, it was possible for the performance of the reagents to be compared.

As shown in Figure 2, reagent 3 (TR#3) showed the largest response, followed by reagent 1 and reagent 2, respectively.
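A simple sketch of this thresholding readout is shown below: given per-cell velocity measurements for each population, the fraction of cells below 2,400 µm/s is computed and compared. The velocity distributions are simulated for illustration and do not represent measured LFC data.

```python
# Sketch of the velocity-threshold comparison across transfection reagents (simulated velocities).
import numpy as np

rng = np.random.default_rng(1)
populations = {
    "control": rng.normal(3000, 200, 5000),
    "TR#1": rng.normal(2800, 350, 5000),
    "TR#2": rng.normal(2900, 300, 5000),
    "TR#3": rng.normal(2600, 400, 5000),
}

THRESHOLD = 2400  # um/s
for name, velocities in populations.items():
    frac_low = np.mean(velocities < THRESHOLD)   # fraction of low-velocity cells
    print(f"{name}: {frac_low:.1%} of cells below {THRESHOLD} um/s")
```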

When velocity data were used, a strong correlation was established between the LFC measurements, which are available in near real time, and the ddPCR results, which take significant time and labor to obtain. This correlation demonstrates the utility of LFC for rapid process monitoring, improving the speed of process development and optimization and ensuring manufacturing consistency.

Additional applications of LFC throughout the AAV vector production process include adventitious agent monitoring to rapidly detect potential contamination as well as cell line characterization during process development and scaleup.

Live-virus vaccine production monitoring

The quantification and characterization of virus-based manufacturing processes are essential components of the production of numerous classes of products, including viral vector vaccines, oncolytic viruses, and live virus vaccines (LVVs). In the case of LVVs, the potency or infectivity is typically the most critical measurement of efficacy. Therefore, real-time potency information from a PAT is extremely desirable during process development and manufacturing. It can increase process knowledge, improve yields, and ensure consistency.

However, existing methods to measure viral potency include the plaque assay and the endpoint dilution assay (50% tissue culture infectious dose, or TCID50), both of which suffer from high variability and long lead times. Thus, they are not capable of serving as PATs. A recent study by McCracken et al.2 detailed the use of LFC as a real-time PAT platform to measure LVV potency as well as detect the presence of adventitious viruses.

In one aspect of the study, Vero cells were seeded onto microcarriers, incubated to allow the cells to become confluent, and then infected with attenuated measles virus. At each time point post infection across multiple independent experiments, a sample was withdrawn from the bioreactor and separated into two fractions.

The first contained the microcarriers with cells attached, whereas the second contained any supernatant cells that had detached from the microcarriers. Cell samples were prepared from both fractions and then analyzed using LFC.

In parallel, supernatant samples were analyzed for viral potency using flow virometry as a surrogate measurement for TCID50. Although flow virometry is a physical measurement rather than infectious titer, it was used as an approximate correlation to the TCID50-based potency assay for measles virus during production.

However, should the ratio of total to infectious particles change due to some undetected process perturbation, a cell-based PAT such as LFC would reflect this change while a physical measurement, such as flow virometry, would not.

As shown in Figure 3, a strong correlation was established between the potency per viable cell and the Radiance infection metric, defined as the percentage of cells with an optical force index greater than 55 s–1. With this correlation, the average absolute log10 difference between the estimated potency and the LFC-based measurements is 0.074, demonstrating an excellent fit to the data.

Figure 3. Correlation between Radiance® Infection Metric and Estimated Potency. The population-wide correlation between Radiance data and estimated potency was determined on a per viable cell basis as measured by the total virus particles. Radiance data include contributions from both microcarrier and supernatant fractions of bioreactor samples collected during viral production using Vero cells. Each point represents the time point, and each experiment is indicated on the plot.
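The sketch below illustrates, with simulated numbers, how an infection metric of this kind and its log10 agreement with estimated potency could be computed. The optical force index values, the linear fit, and the resulting error are illustrative and do not reproduce the published calibration.

```python
# Illustrative sketch: infection metric (% cells above an optical force threshold)
# and its log-linear agreement with estimated potency. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)

def infection_metric(optical_force_index, threshold=55.0):
    """Percentage of cells whose optical force index exceeds the threshold (s^-1)."""
    return 100.0 * np.mean(optical_force_index > threshold)

# Simulated per-sample data: (optical force indices for sampled cells, estimated potency per viable cell)
samples = [(rng.normal(45 + 3 * i, 8, 2000), 10 ** (0.2 * i - 1)) for i in range(8)]

metrics = np.array([infection_metric(ofi) for ofi, _ in samples])
log_potency = np.log10([p for _, p in samples])

# Linear fit of log10(potency) vs. infection metric, then mean absolute log10 difference.
slope, intercept = np.polyfit(metrics, log_potency, 1)
predicted = slope * metrics + intercept
print("mean |log10 difference|:", round(np.mean(np.abs(predicted - log_potency)), 3))
```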

Once established, this correlation can then be used to calculate the titer of future production samples in minutes using the LFC data.

Using LFC as a rapid PAT for monitoring potency, as well as an analytical assay for measuring infectious titer, helps pave the way for reducing the research, development, and manufacturing timeline for LVVs and for other vaccines that rely on viruses during development and manufacturing, including protein subunit vaccines produced in Sf9 cells via baculovirus and adenovirus-based viral vector vaccines.

The capability to make rapid and precise cell-based infectivity measurements has the potential to improve the entire vaccine development life cycle from R&D to clinical trials and manufacturing, reducing the cost and time associated with LVVs and other viral vaccines.

 

References
1. Hebert, C.G., et al., Rapid quantification of vesicular stomatitis virus in Vero cells using Laser Force Cytology™. Vaccine, 2018. 36(41): p. 6061-6069.
2. McCracken, R., et al., Rapid In-Process Measurement of Live Virus Vaccine Potency Using Laser Force Cytology™: Paving the Way for Rapid Vaccine Development. Vaccines (Basel), 2022. 10(10).
3. Hebert, C.G., et al., Viral Infectivity Quantification and Neutralization Assays Using Laser Force Cytology™, in Vaccine Delivery Technology: Methods and Protocols, B.A. Pfeifer and A. Hill, Editors. 2021, Springer US: New York, NY. p. 575-585.
4. Bommareddy, P.K., et al., MEK inhibition enhances oncolytic virus immunotherapy through increased tumor cell killing and T cell activation. Sci Transl Med, 2018. 10(471).
5. Baker, M., Reproducibility crisis: Blame it on the antibodies. Nature, 2015. 521(7552): p. 274-276.
6. Clement, N. and J.C. Grieger, Manufacturing of recombinant adeno-associated viral vectors for clinical trials. Mol Ther Methods Clin Dev, 2016. 3: p. 16002.
7. van der Loo, J.C. and J.F. Wright, Progress and challenges in viral vector manufacturing. Hum Mol Genet, 2016. 25(R1): p. R42-R52.
8. Wright, J.F., Transient transfection methods for clinical adeno-associated viral vector production. Hum Gene Ther, 2009. 20(7): p. 698-706.
9. Matsushita, T., et al., Adeno-associated virus vectors can be efficiently produced without helper virus. Gene Ther, 1998. 5(7): p. 938-45.
10. LumaCyte. Radiance® Label-Free Monitoring of AAV Transfection in HEK293 Cells Using Laser Force Cytology™ (LFC™). June 6, 2023.

 

All the authors work at LumaCyte. Colin Hebert, PhD, is senior vice president, scientific and business operations. Mina Elahy, PhD, is a senior application scientist. Sean Hart, PhD, is CEO and CSO. Jonathan Turner, PhD, is an application scientist. Renee Hart is president and CBO.

Higher Throughput, More Flexible Single-Cell Multiomics Analysis
https://www.genengnews.com/topics/omics/higher-throughput-more-flexible-single-cell-multiomics-analysis/ Wed, 28 Jun 2023
BD Biosciences describes a platform that can isolate, barcode, and analyze single cells at high throughput without sacrificing sample integrity.

By Aruna Ayer, PhD, and Cynthia Sakofsky, PhD

Single-cell multiomics provides a comprehensive view of the cellular heterogeneity and the complex interplay between multiple layers of cellular omics, namely genome, epigenome, transcriptome, and proteome. Researchers are increasingly applying single-cell multiomics to unveil cellular complexity across many fields, including cancer research, drug discovery, infectious disease research, and more.

With more affordable next-generation sequencing options, single-cell multiomics will increasingly be used as a common approach to profiling cells and tissues. Understandably, the number of single-cell technologies has also rapidly grown in recent years and includes droplet- and microwell-based platforms, as well as microfluidics-free and instrument-free single-cell workflows.

Even with the myriad options for single-cell assays that these technologies provide, researchers are often challenged with technical issues that can impact biological outcomes, such as batch effects or experimental run-to-run variability, sample loss, low cell capture rate, high cell multiplet rate, and high background noise, to name a few. Hence, there is a need for a single-cell platform that can overcome these challenges and deliver an accurate and reproducible representation of cellular information in diverse samples, while also offering higher, flexible throughput that reduces costs and boosts experimental efficiency.

A system for single-cell multiomics analysis

The high-throughput BD Rhapsody™ HT Xpress System leverages a microwell-based single-cell partitioning technology with minimal benchtop equipment to perform single-cell analysis. With no fluidic pumps or microfluidics, there is no clogging of channels with precious samples. The product features a flexible eight-lane cartridge design with the ability to run up to eight times the number of lanes as the on-market BD Rhapsody Express System with similar workflow time and performance.

With the eight-lane cartridge, one can process up to half a million cells per cartridge. Additionally, in combination with the new BD Flex Single-Cell Multiplexing Kit (SMK), a user can run up to 24 samples per lane or up to 192 samples per cartridge. Single-cell data obtained from each lane is reproducible and concordant between lanes and cartridges, with no lane-to-lane contamination within a cartridge.

Figure 1. A t-SNE plot generated for a sample that was loaded at 5,000 cells per lane versus 65,000 cells per lane shows no batch effect between lanes at varying cell input (top left). Differential gene expression correlations between 5,000 and 65,000 cell loads on different lanes show high concordance (top right). A bar graph shows the percentages of expected sample tags identified in different lanes—results that demonstrate absence of lane-to-lane contamination (bottom).

A demonstration of system capabilities

Samples stained with different sample tags from an SMK showed no unexpected sample tag data in neighboring lanes (Figure 1). The HT Xpress also features partial use of the cartridge, allowing users to run the unused lanes of the same cartridge at a different time for up to four months.

The HT Xpress offers cell retention of samples with minimal cell loss. Cell capture rates are typically >80%, even with cells of varying sizes and fragility, including neutrophils, T cells, and natural killer cells, as well as nuclei. The system enables the introduction of cells into a cartridge and allows them to settle into individual microwells by gravity. This process results in minimal cell manipulation and assures cell and mRNA integrity.

The stochastic pattern of cells falling into microwells follows a Poisson distribution, which can be used to theoretically estimate the number of wells that contain more than one cell, that is, a multiplet. When a multiplet occurs, the transcriptomes and/or proteomes of two or more cells are captured on a single barcoded bead simultaneously, rendering the data obtained from these cells unusable since the individual cell information cannot be deconvoluted.
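A quick sketch of that Poisson estimate follows: given a cell input and a well count, the expected fraction of occupied wells holding more than one cell can be computed directly. The well count used here is a hypothetical placeholder, not a BD specification.

```python
# Sketch of a theoretical multiplet estimate under Poisson loading of microwells.
from math import exp

def multiplet_rate(n_cells, n_wells):
    """Fraction of occupied wells expected to hold more than one cell."""
    lam = n_cells / n_wells          # mean cells per well
    p0 = exp(-lam)                   # probability a well is empty
    p1 = lam * exp(-lam)             # probability a well holds exactly one cell
    occupied = 1 - p0
    return (occupied - p1) / occupied if occupied > 0 else 0.0

N_WELLS = 230_000                    # hypothetical number of wells per lane (illustrative)
for cells in (5_000, 20_000, 60_000):
    print(f"{cells} cells loaded -> ~{multiplet_rate(cells, N_WELLS):.1%} multiplets")
```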

The HT Xpress includes a scanner component that can visualize cells and beads in the wells, providing an empirical estimate of multiplet rates. There is high concordance between theoretical and scanner estimates of multiplets, even at high cell inputs. In fact, with cell inputs close to 60,000 per lane, scanner multiplet rates have been shown to be reproducibly <10% (Figure 2).

The scanner used with the HT Xpress requires just a field upgrade of the existing Rhapsody scanner. The scanner enables not only empirical multiplet rate estimates, but also visual quality control (QC) of cell viability and step-by-step QC metrics of the cartridge workflow, including cell and bead capture, wash steps, and bead retention rates.

Figure 2. Varying cell loads (100 to 65,000 cells) in the different lanes of the eight-lane cartridge demonstrate a predictable trend in cell capture and cell multiplet rates.

Implications for scientific discovery

Such a visual in-process QC after single-cell partitioning allows users to make more informed decisions about their experiments prior to library preparation and sequencing, which can save a user a substantial amount of money. Additionally, the biomolecules captured on the beads can be stored for up to four months after the cDNA conversion step, allowing users more flexibility to subsample beads for initial shallow sequencing to evaluate library quality and performance metrics before deciding to potentially proceed with further sequencing of the entire experiment.

The HT Xpress fits within an end-to-end single-cell multiomics solution that is supported by BD Rhapsody single-cell multiomics assay kits and the BD Rhapsody Analysis Bioinformatic Pipeline tool. The pipeline tool generates detailed output files that can be used for comprehensive secondary analyses.

The latest pipeline version also automatically generates a sharable HTML file that highlights QC and summary metrics, with an additional interactive t-distributed stochastic neighbor embedding (t-SNE) plot displaying features such as single bioproduct (gene or protein) expression data and immune cell-type calling information (Figure 3).

Figure 3. Example of the interactive portion of an HTML file showing a single bioproduct expression graph displaying a t-SNE on the left and a histogram on the right for individual bioproducts (top), and an immune-cell-type experimental graph showing a t-SNE plot with each cell annotated based on a cell-type prediction algorithm (bottom).

Aruna Ayer, PhD, is a senior director, and Cynthia Sakofsky, PhD, serves as a staff scientist at BD Biosciences. To learn more about the BD Rhapsody single-cell multiomics solutions, visit bdbiosciences.com.

High-Plex Spatial Signatures Can Predict Responses to Immunotherapies
https://www.genengnews.com/resources/high-plex-spatial-signatures-can-predict-responses-to-immunotherapies/ Tue, 04 Apr 2023
Akoya Biosciences and OracleBio describe how PhenoCode Signature Panels can be used to develop prognostic biomarkers and facilitate translational and clinical research.

By Bethany Remeniuk, PhD, Nicole Couper, and Yi Zheng, PhD

Combination therapies, which target multiple immune checkpoints at once, have the potential to deliver improved outcomes for cancer patients. The development of clinically relevant predictive tools to stratify immune checkpoint inhibition responders from nonresponders will be critical for the advancement of such treatments.

Multiplexed cell and spatial phenotyping of the tumor microenvironment (TME) can provide a deeper understanding of complex interactions between tumors and the immune system, setting the stage for improved patient stratification. Spatial biology provides advantages over other technologies by revealing a clearer and more detailed picture of cellular- and protein-level co-expression, localization, and arrangements within the TME, which in turn can be used to develop prognostic biomarkers called spatial signatures based on the spatial distribution of certain phenotypic features.

PhenoCode™ Signature Panels simplify spatial biomarker assay development and validation when used in combination with Akoya’s PhenoImager® platform. Each of the customizable multiplex panels includes key markers for comprehensive mapping of the TME and immune status, providing a rapid, quantitative, end-to-end spatial phenotyping workflow.

Fast and flexible PhenoCode Signature Panels

PhenoCode Signature Panels are designed in a customizable format, allowing easy integration of an additional immune cell or checkpoint marker to a preset five-plex panel (Figure 1). Each panel focuses on distinct areas of tumor biology and potential response to therapy that are of greatest interest to translational and clinical researchers. These multiplex immunofluorescence panels combine Akoya’s patented barcode chemistry with Opal-tyramide signal amplification, providing similar accuracy and sensitivity to the gold-standard chromogenic immunohistochemistry. The time required to develop and validate new spatial signatures using the panels is reduced three-fold compared to conventional assay development.

Figure 1. PhenoCode Signature Panels provide inherent flexibility that allows for the rapid and systematic analysis of the TME, requiring minimal assay development and optimization.

The panels are used in a seven-step procedure to answer key questions and interrogate the TME. The first three steps follow traditional formalin-fixed paraffin-embedded sample preparation: slide preparation (baking and dewaxing), antigen epitope retrieval, and blocking. In the fourth step, the slides are stained with a primary antibody cocktail in which each antibody has been conjugated to a given barcode.

In step five, a single antibody is revealed at a time, beginning with the hybridization of a complementary oligo barcode conjugated to horseradish peroxidase. Tyramide signal amplification is used in step six to amplify immunohistochemistry detection by covalently depositing an Opal fluorophore near the targeted antigen. Once signal amplification is complete, step seven begins. The horseradish peroxidase–conjugated oligo is dehybridized. Steps five, six, and seven are repeated for each antibody, labeling the markers with the different dyes until all have been revealed.

Spatial signatures for NSCLC

In the study outlined below, a PhenoCode Signature Immuno-Contexture Human Protein Panel was used to accelerate identification of spatial signatures in non-small cell lung cancer (NSCLC) that may reliably predict response to immune checkpoint inhibition.

NSCLC patients can have impaired immune responses within the TME, leading to tumor growth progression and poor prognosis. Accurate cell phenotyping combined with spatial phenotyping can provide a better understanding of complex cellular interactions underpinning the tumor-immune response.

PhenoCode Signature Panels and associated artificial intelligence (AI)-powered image analysis methods were used to identify populations of immune cells and their functional status, as well as their interactions within the TME in a set of NSCLC tissue cores from patients treated with first-line standard-of-care and second-line immuno-oncology treatment. Patient groups included responders (R—full responders, partial responders, and stable disease) and nonresponders (NR).

Formalin-fixed paraffin-embedded NSCLC tissue microarrays (TMAs), comprising n = 38 cores containing a range of carcinomas and pathological Tumor Node Metastasis (pTNM) stages, were stained using the PhenoCode Signature Immuno-Contexture Human Protein Panel. This panel includes markers for T cells (CD8 and FoxP3), macrophages (CD68), checkpoint inhibitors (PD-1 and PD-L1), and PanCK as a tumor marker.

Stained TMAs were scanned at 20× magnification on a PhenoImager HT multiplex imaging system. A total of 36 cores passed image QC and progressed to image analysis. Deep learning algorithms were developed to segment each core into tumor and stroma regions of interest (ROIs) and to accurately detect and classify different cell populations. A DeepLabv3+ neural network was used to develop the classifier using DAPI and PanCK. A customized cell analysis algorithm was trained using a U-Net neural network to detect individual cell lineages and subsequent phenotypes of interest.

A hierarchical approach detected CD8, CD68, tumor cells, and then DAPI cells. Staining variance for CD8, CD68, and DAPI was overcome by generating training labels for the three cell types across the TMA cores and using the three markers as input channels for deep learning training. Spatial analysis was performed using an OracleBio proprietary program to calculate readouts for mean nearest neighbor distances between cell populations, as well as neighborhood analysis for selected phenotypes (Figure 2).
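The sketch below illustrates two of these spatial readouts on simulated cell centroids: the mean nearest-neighbor distance from one phenotype to another, and neighbor counts within a 20 µm radius. It is not OracleBio's proprietary program; the coordinates and phenotype labels are invented for the example.

```python
# Illustrative spatial readouts: nearest-neighbor distance and 20-um neighborhood counts.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
tumor_xy = rng.uniform(0, 500, size=(300, 2))   # simulated PanCK+ cell centroids (um)
cd8_xy = rng.uniform(0, 500, size=(80, 2))      # simulated CD8+ cell centroids (um)

# Mean nearest-neighbor distance from each CD8+ cell to the closest tumor cell.
tree = cKDTree(tumor_xy)
nn_dist, _ = tree.query(cd8_xy, k=1)
print(f"mean CD8+ -> tumor nearest-neighbor distance: {nn_dist.mean():.1f} um")

# Neighborhood analysis: tumor cells within 20 um of each CD8+ cell.
neighbors = tree.query_ball_point(cd8_xy, r=20.0)
print(f"mean tumor neighbors per CD8+ cell (20 um radius): {np.mean([len(n) for n in neighbors]):.2f}")
```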

Immune cell counts, phenotypes, and spatial interactions were identified within the tumor and stroma ROIs per core. Data included total and negative cell phenotype counts, cell density in tumor and stroma, as well as average cell distances between specified phenotypes and neighboring spatial interactions in each of the 36 cores in the TMA set.

Immune cell subsets quantified included FoxP3+, CD8+/PD-1+, CD8+/FoxP3+/PD-1+, and CD68+/PD-L1+. Tumor cells of interest included PanCK+ and PanCK+/PD-L1+.

Results indicated that the density of single-positive FoxP3 cells per mm2 was significantly lower in the tumor ROI of the R group vs. the NR group (p ≤ .05). A trend was observed in the ratio between CD8 (single and PD-1 dual combined populations) and FoxP3 (single population), where there was a higher proportion of CD8 phenotypes in both the tumor ROI and the stroma ROI of the R group vs. the NR group. Spatial interactions between phenotypes varied across individual cores, and although trends were observed, no significant differences were found between the R group and the NR group (Figure 2).

Figure 2. Neighborhood analysis performed across TMA cores. Examples include a core from the responder group (top row) and a core from the nonresponder group (bottom row). (A & B) Tumor/stroma segmentation. (C & D) Connectivity graph for neighboring cells within a 20 μm radius (red: tumor cells; purple: immune cells; green: macrophages). (E) Example phenotype (phenotype average distance within region of interest). (F) Immune, tumor cell, and macrophage phenotype interactions (within a 20 μm radius).

The combination of high-quality, spatial phenotyping data provided by the PhenoCode Signature Panel, coupled with deep learning quantitative image analysis techniques, enabled detailed characterization of the complex cellular interactions, at both the functional and spatial levels, within the TME of immuno-oncology-treated NSCLC tissue.

Conclusion

Biomarker discovery based on spatial biology establishes a path toward the use of multiplexed imaging in the clinic, as technologies and workflows become more practical, high throughput, and analytically robust. PhenoCode Signature Panels provide an off-the-shelf, flexible six-plex option that allows more thorough interrogation of the TME with minimal user development requirements.

The ability to deploy signature panels supported by PhenoCode chemistry can accelerate the identification of spatial signatures with the potential to reliably predict response to immune checkpoint inhibition therapy in clinical trials.

Bethany Remeniuk, PhD, is the associate director of laboratory applications at Akoya Biosciences. Nicole Couper is a deputy clinical operations manager at OracleBio. Yi Zheng, PhD, is a director of reagent development at Akoya Biosciences.

Single-Use Pumps in Biopharmaceutical Manufacturing
https://www.genengnews.com/resources/single-use-pumps-in-biopharmaceutical-manufacturing/ Thu, 02 Mar 2023
PSG describes the Quattroflow EZ-Set, a pump chamber replacement system that can reduce downtime in the production changeover process.

By Andreas Frerix, PhD

Today’s most common biopharmaceutical manufacturing systems require the handling, transferring, processing, and purification of large-molecule drugs produced in living cells—cultured animal cells, bacterial cells, or yeast cells. These operations, which are performed with the materials in a liquid phase, require the use of pump technologies that can reliably provide volumetric consistency and accuracy, appropriate pressures and flow rates, and low pulsation (to sustain processing conditions), as well as low-shear, low-heat input and material compatibility (to protect the biological drug from being harmed).

The development of single-use quaternary diaphragm pumps has been a critical advance for manufacturers of biopharmaceuticals. The image above shows a standard plastic pump chamber. The creation of a pump chamber replacing system from Quattroflow optimizes time, cost, reliability, and safety.

Traditionally, permanent stainless-steel pumping and processing systems have been used for upstream and downstream operations, but the time and costs involved in the operation, cleaning, maintenance, and quality control of a system to prepare for the next production run can become prohibitive. Such concerns led to the creation of single-use pumps that feature a disposable pump head and chamber that can be easily removed and replaced between production runs, eliminating the time and costs needed to revalidate the equipment in a stainless-steel system.

This technology, though, still required improvements to optimize changeover times and simplify installation processes. An important advance in this area came with the development of a pump chamber changing system that reduces the time needed to replace a disposable single-use pump chamber to seconds.

Before proceeding, we should take a step back and discuss what the terms “complex,” “precise,” and “pure” mean in the context of biopharmaceutical manufacturing. The foundation of biopharmaceutical manufacturing rests on various types of unit operations. Whereas each unit operation features its own set of operational criteria, all unit operations are alike in that they can produce a viable, contaminant-free drug suitable for human administration only if the manufacturer strictly adheres to an unbending set of operational parameters and structures.

Three of the more common unit operations in biopharmaceutical manufacturing are tangential flow filtration (TFF), chromatography, and virus filtration. TFF is also known as cross-flow filtration, a process in which the biopharmaceutical’s feed stream flows tangentially across the filter membrane at positive pressure. Whereas TFF is used to concentrate the product based on molecule size, chromatography is used to purify target molecules from the rest of the process stream based on adsorption to a resin. Virus filtration is used to ensure the safety of the drugs that are produced.

The common thread among these various types of unit operations is that each needs a pumping technology that can satisfy its specific operational parameters.

Single-use solution

Some additional mention must also be given to the advantages that utilizing single-use quaternary diaphragm pumps in biopharmaceutical manufacturing can deliver. The main advantage for these pumps, whether used in traditional stainless-steel or single-use setups, is their unique form of operation.

The four quaternary diaphragms are driven one after another by a connector plate, which moves back and forth out of its central position in a stroke that is generated by an eccentric shaft, with the length of the stroke determined by the angle of the eccentricity. The four pumping chambers, which operate much as the chambers of the human heart do, keep the product flow constantly moving forward in a volumetrically consistent low-shear and low-pulse manner.

The pump’s chambers also contain no rotating parts that can be subject to friction, meaning that there is minimum heat buildup that can compromise the product. This mode of operation means that the pumps can run dry, are self-priming, and produce minimal shear because of low slip. In addition, they offer low-pulsation, leak-free operation while having great dry/wet suction-lift capabilities.

The quaternary diaphragm pump is also adaptable to single-use production configurations. A single-use pump enables biopharmaceutical manufacturers to essentially eliminate the oftentimes prohibitive costs of cleaning and validating pumps and systems. The result is a quicker and more cost-effective production process, and one that still delivers preferred levels of product purity and sterility with no chance for cross-batch or cross-product contamination.

The fulcrum of the single-use pump is its product-wetted plastic pump chamber. This chamber can be replaced as a complete unit.

Next step forward

The EZ-Set Pump Chamber Replacing System from Quattroflow allows single-use quaternary diaphragm pump chambers to be replaced in as little as 30 seconds without the need for special tools or torque wrenches. The result is decreased downtime during product changeovers and an improved production process.

Although single-use pump technology succeeded in reducing the time and costs associated with cleaning and revalidation after production runs, there was still interest in further reducing the time needed for pump head replacement. The breakthrough came with the development and release of a pump chamber replacement system, specifically, the Quattroflow EZ-Set.

The system reduces downtime in the production changeover process. It allows manufacturers to replace a single-use pump chamber in 30 seconds or less without the need for torque wrenches or other special tools and equipment—all while allowing the user to wear rubber gloves during the replacement process.

Pump chamber replacing systems can also be retrofitted on existing motor drives, which makes upgrades quick and easy to perform.

The five steps that are needed to replace the pump chamber are as follows:

• Remove the pump’s pressure plate.
• Take the pump chamber out of the ring drive.
• Push the new pump chamber onto the ring drive.
• Reinstall the pressure plate.
• Slightly rotate the pressure plate to lock.

Conclusion

Many skills are needed to produce biopharmaceuticals, but in the end, the final product must be one that is unquestionably safe to use while simultaneously allowing the manufacturer to reap the financial benefits of an optimized patent window. The arrival of single-use pumps on the scene has virtually eliminated the cost and downtime that were previously required to clean and validate pumping systems.

Andreas Frerix, PhD (andreas.frerix@psgdover.com), is the product management director for Quattroflow at PSG, a Dover company. Website: www.psgdover.com.

Maximizing the Impact of 1D and 2D Image Analysis
https://www.genengnews.com/resources/maximizing-the-impact-of-1d-and-2d-image-analysis/ Wed, 07 Dec 2022
TotalLab offers image processing tips and argues that image analysis software is an integral part of gel-based investigations.

When performing 1D and 2D gel electrophoresis within your laboratory, you doubtlessly pay a great deal of attention to your experimental design, sample preparation, and technical execution, likely in accordance with a detailed standard operating procedure. However, can you say you devote the same degree of care to your image analysis process? You should. After all, this process is where all your quantifiable data is generated. In a sense, your data doesn’t come from your experiment, but from your image of your experiment.

TotalLab has been creating 1D and 2D image analysis software for the life sciences for over 20 years. During that time, we’ve gained a wealth of experience not only in bridging the gap between captured images and reliable data, but also in helping scientists and medical professionals acquire and derive value from the highest-quality images possible.

Image quality concerns

Among the factors that harm image quality (and therefore data quality), the most common is image compression. The second most common is low bit depth. We will consider each of these factors from a software and image data perspective.

Image compression: We would always recommend that you avoid applying any compression to your captured image files. Compression reduces file size by stripping data from an image. Even if there is no visible change to the naked eye, your measurements are taken from the data within the image (the raw pixel values embedded in it), so any removal of data from your image files removes data from your analysis and may degrade your results. The most common action that allows compression to sneak into your workflow is the transfer of an image from your capture device (gel documentation system, densitometer, etc.) to your analysis software or to another computer. When you export images from capture software, you should always select “export for analysis” rather than “export for publication.” And you should definitely reject the idea of just taking a screenshot of the image!

Within our software, we analyze images in grayscale format, the reason being that in grayscale images, the value of each pixel represents only the intensity of the light (and, therefore, the intensity of bands or spots). No pixel data storage space is used to store color information. False-color modes can be applied to your images to increase the contrast and help you identify fainter bands or spots. However, all of the measurements in the background are performed on the grayscale image. Most, if not all, modern gel documentation systems will allow you to export analysis images in grayscale format, which has become the industry standard.

It is advised to export the image in grayscale, rather than exporting a color image and then converting to grayscale, because a color image is divided between three different color channels—red, green, and blue (RGB). So, a 24-bit color image, which has 8 bits per channel, when converted to grayscale essentially becomes an 8-bit image with weighted averaging of the red, green, and blue channels. Hence, some pixel data could be lost or changed in the conversion process, and you may not get a true representation of your image reflected in the pixel data, where measurements are drawn from.
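A small sketch of that conversion step is shown below: three 8-bit channels are collapsed by weighted averaging (the weights are one common luma convention, assumed here purely for illustration) and re-quantized to 8 bits, so fractional intensity information is rounded away.

```python
# Sketch of RGB-to-grayscale conversion and the re-quantization it involves.
import numpy as np

rgb = np.array([[[200, 40, 180]]], dtype=np.uint8)       # one pixel, 8 bits per channel
weights = np.array([0.299, 0.587, 0.114])                 # one common R, G, B weighting (assumed)

gray_float = (rgb.astype(float) * weights).sum(axis=-1)   # 103.8: exact weighted average
gray_8bit = np.round(gray_float).astype(np.uint8)         # 104 after re-quantization to 8 bits

print(gray_float[0, 0], gray_8bit[0, 0])                  # fractional intensity is rounded away
```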

Low bit depth: The bit depth of an image (also referred to as color depth or pixel depth) is the number of bits used to represent each pixel in an image. In a 1-bit image, for example, each pixel is represented by just 1 bit. Such an image, then, can store only one binary value—1 or 0—which means that the image can only be either completely black or completely white.

Essentially, the bit depth is a direct measure of how much data can be stored in each individual pixel of an image, and it is this data that we read for use in our algorithms and measurements to generate results. The logical extension of this is that the greater the bit depth of an image, the more data is available for you to use and subsequently the greater the quality of your results. We recommend capturing your images in the highest bit depth settings available on your capture device, as this will allow you to use the greatest range of data and therefore the greatest sensitivity possible. The relationship between bit depth and sensitivity all comes down to the storage of binary data within pixels and how data is expressed visually in a digital image. The best way to understand this is with a diagram.

As the diagram in Figure 1 demonstrates, increasing the bit depth of an image actually produces an exponential increase in the number of values available for each pixel (and, therefore, levels of gray). As an example, an 8-bit grayscale image file can store 1 of 256 shades of gray in each pixel, but if you increase your bit depth to 16 bits, your image file now has 65,536 possible grayscale values to choose from for each pixel. This vastly increases the amount of data and level of quantitative accuracy.

Besides capturing the highest-bit-depth TIFF images possible, you can take a few additional actions during the capture and export steps to increase the quality of your images. These actions include ensuring that you’re using the highest resolution provided by your capture device. Typically, a minimum resolution of 300 dots per inch (DPI) or a pixel size below 100 microns will provide an image that is detailed enough for accurate analysis and small enough in file size for efficient processing.
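Two quick back-of-the-envelope checks behind these numbers are sketched below: the number of gray levels available at a given bit depth, and the physical pixel size implied by a scan resolution in DPI.

```python
# Gray levels per bit depth (2 ** bits) and pixel size implied by DPI.
for bits in (8, 16):
    print(f"{bits}-bit: {2 ** bits:,} gray levels per pixel")

dpi = 300
pixel_size_um = 25_400 / dpi                              # 25,400 um per inch
print(f"{dpi} DPI ~ {pixel_size_um:.1f} um pixels")       # ~84.7 um, below the 100 um guideline
```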

Quality control checks before analysis

Once you’ve captured or received your image, but before you perform your analysis, it’s important to perform a quality control check to make sure the image is appropriate for analysis. A common problem in gel and blot images to check for is areas of saturation within the image. Most modern capture or analysis software will highlight this by default to the user, usually in a bright color. Saturation occurs when the exposure time of the gel is set too high during capture, that is, when the amount of light captured from that spot/band goes beyond the camera’s upper limit of detection.

This causes data loss because any signal above that cut-off point cannot be detected. For example, if your camera’s upper limit of detection is 1,000, every spot or band with a true intensity equal to or above 1,000 will be recorded as 1,000. The true intensity of those bands is lost, and the intensities of different bands cannot be quantified or compared within your experiment.
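A minimal sketch of a pre-analysis saturation check is shown below; it flags pixels at the assumed full-scale value of a 16-bit image and reports the clipped fraction. The image and threshold are illustrative and not tied to any particular capture device or software.

```python
# Sketch of a saturation check: flag pixels at the sensor's assumed upper limit.
import numpy as np

def saturation_report(image, upper_limit=65_535):
    saturated = image >= upper_limit
    return saturated.mean(), saturated.sum()

# Simulated 16-bit gel image with a clipped (overexposed) band.
img = np.random.default_rng(4).integers(0, 30_000, size=(512, 512), dtype=np.uint16)
img[100:120, 200:260] = 65_535

frac, count = saturation_report(img)
print(f"{count} saturated pixels ({frac:.2%} of image)")  # bands at this level cannot be quantified
```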

To avoid this, look at the setup of your device and make sure that your bands or spots are within the dynamic range of your preview window by reducing the exposure time. Alternatively, if you are using a laser-based system, you can fine-tune the voltage of the photomultiplier tube (PMT). The sweet spot for exposure time is when your bands or spots of interest are most visible but haven’t yet become saturated. For laser-based systems, your voltage should be set as high as possible before saturation occurs. If you’re unsure of how best to change this, you should review your scanner documentation or consult your scanner supplier.

How to leverage software to save time in your laboratory

Although much of the image capture and analysis advice for 1D images also applies to 2D images, working with the spots of a 2D gel experiment, rather than the bands of a 1D gel, involves some specific requirements.

The specific requirements for 2D gels include two additional workflows: one for image alignment, and one for spot counting and measurement. Both can take a considerable amount of an operator’s time. Software that enables users to automatically align 2D gel and blot images on top of each other and automatically detect spots can save a huge amount of time and also increase accuracy by reducing inter-operator variability.

Figure 2A. Imaging software may be used to model 1D and 2D gels in three dimensions, improving analyses and generating striking images for publication.
Figure 2B. This 3D image was generated with SpotMap, TotalLab’s host cell protein coverage analysis software. The software uses intuitive algorithms and an easy-to-use interface to enable users to obtain an accurate HCP coverage percentage in minutes.

Software that provides pixel-level auto-alignment (to account for the inherent geometric distortions involved in 2D gel electrophoresis) is capable of inserting hundreds of separate vectors for alignment in different parts of your images in approximately 10 seconds. To manually insert the same number of alignment vectors would take hours.

There are, of course, instances where manual intervention is preferred, and if this is the case, it’s important that your choice of software also provides intelligent manual features, such as giving users the ability to snap to spots between images. Other manual features that allow you to check the accuracy of your alignment are also important. An industry standard is the use of an image checkerboard, which is made up of alternating sections of your two images. When the edges of the squares line up with each other, your two images are aligned.
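For readers who want to reproduce the checkerboard view outside their analysis package, it amounts to a few lines of array manipulation. The sketch below assumes two equally sized grayscale images that have already been through alignment; the tile size is an arbitrary example:

```python
# Illustrative sketch: build a checkerboard composite of two gel/blot images
# to eyeball alignment quality. Assumes equally sized 2D NumPy arrays.

import numpy as np

def checkerboard(img_a: np.ndarray, img_b: np.ndarray, tile: int = 64) -> np.ndarray:
    """Alternate square tiles taken from img_a and img_b."""
    if img_a.shape != img_b.shape:
        raise ValueError("Images must have the same dimensions")
    rows, cols = np.indices(img_a.shape)
    mask = ((rows // tile) + (cols // tile)) % 2 == 0
    return np.where(mask, img_a, img_b)

# Usage (hypothetical variable names):
#   composite = checkerboard(reference_gel, aligned_gel)
# If spot edges line up across tile boundaries, the alignment is good.
```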

Enriching your data after your experiment

The right image analysis software can expand the information you can glean from your experiments and facilitate the visualization of relationships embedded in image data. For example, it can be used to model 1D and 2D gels in three dimensions and produce some stunning imagery for inclusion in publications and presentations (Figures 2A & 2B). Image annotation features, which include the sharing of annotated images across laboratory users for further analysis and discussion, are also a great way to support cross-collaboration (Figure 3).

Figure 3. Having the ability to annotate an image and share that annotated image across laboratory users facilitates cross collaboration.

Laboratories operating in a regulated environment don’t need to sacrifice these capabilities when they use image analysis software. They can opt for solutions that have secure sign-in, full audit trails, and electronic signatures. They can also gain the ability to prepare reports of these data to support compliance with 21 CFR Part 11 regulations.

Confidence in the reproducibility of your results, across replicates and operators, can be ensured by using software that supports robust and simplified workflows. Most important, the software needs to have been designed to be easy to use. Such software can minimize the time spent training new users and prevent user error.

Although the quality of the data you extract from 1D gel, 2D gel, and immunoblotting experiments depends on your practical setup, the images you capture of your experiment are just as important. By ensuring that your choice of analysis software is well informed, and by investing in something that has been developed from the ground up with operators and laboratories in mind, you can ensure a consistently high level of image and data analysis across operators and gel documentation systems in your laboratory.

 

Steven Dodd, PhD, is head of sales and business development at TotalLab.

The post Maximizing the Impact of 1D and 2D Image Analysis appeared first on GEN - Genetic Engineering and Biotechnology News.

Leveraging Modern Electronics to Streamline CRISPR Workflows
https://www.genengnews.com/resources/leveraging-modern-electronics-to-streamline-crispr-workflows/
Wed, 07 Dec 2022 11:53:26 +0000
CRISPR QC describes how CRISPR Complete uses a chip-based biosensor to select CRISPR system components and optimize editing processes.

By Kiana Aran, PhD

The adoption of CRISPR-Cas systems in therapeutics development has led to the rapid emergence of several clinical candidates for treating rare and challenging diseases, including sickle-cell anemia, β-thalassemia, Duchenne muscular dystrophy, and ocular diseases. Translation into the clinic has been facilitated by a robust preclinical framework of genome editing tools, empowering nearly any R&D team to streamline gRNA design and synthesis, select the most desirable Cas protein for its application, and characterize on- and off-target editing in live cells.

These tools form the foundation for a typical preclinical CRISPR editing workflow. In general, this workflow involves the following steps: 1) CRISPR-Cas system selection, 2) gRNA design (and synthesis, in the case of ribonucleoprotein transfection), 3) transfection into live cells, 4) single-cell cloning, 5) screening, and 6) on-/off-target analysis. This basic workflow has been shown to be useful in a broad array of experimental conditions. It has accommodated many Cas-gRNA combinations, various transfection or cell culture conditions, and PCR- or NGS-based techniques for assessing on-/off-target editing.

While seemingly straightforward, this now-routine, entrenched workflow has its drawbacks. It can be labor-, time-, and cost-intensive. For example, difficulties can arise in gRNA design and selection, a task that has been eased by the introduction of several in silico algorithms and frameworks. These tools allow the manipulation of many parameters, including PAM positioning, GC content, secondary structures, and mismatches. Nonetheless, gRNAs that appear perfect on a computer often perform poorly, resulting in minimal editing or undesirable off-target effects in cells.
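To illustrate the kind of pre-filter such in silico tools apply, the sketch below scans a target region for 20-nt spacers followed by an NGG PAM and keeps those with moderate GC content. It is a simplified, hypothetical example (the sequence and thresholds are invented, and only the forward strand is scanned), and, as noted above, passing such filters does not guarantee good performance in cells:

```python
# Illustrative sketch of a basic in silico gRNA pre-filter for SpCas9-style
# editing: find 20-nt spacers immediately followed by an NGG PAM and keep
# candidates with moderate GC content. Sequence and thresholds are examples.

import re

def candidate_spacers(target: str, gc_min: float = 0.40, gc_max: float = 0.70):
    """Yield (position, spacer, PAM) tuples for candidate sites on the forward strand."""
    target = target.upper()
    # Lookahead regex allows overlapping candidates at every position.
    for match in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", target):
        spacer, pam = match.group(1), match.group(2)
        gc = (spacer.count("G") + spacer.count("C")) / len(spacer)
        if gc_min <= gc <= gc_max:
            yield match.start(), spacer, pam

example_region = "ATGCGTACCGTTAGCCATGGCTAGCTAGGCCGTACGATCGGATCCGTAGCTAGCAGGTT"
for pos, spacer, pam in candidate_spacers(example_region):
    print(f"pos {pos:3d}  spacer {spacer}  PAM {pam}")
```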

When problems arise in CRISPR editing workflows, researchers often resort to awkward and time-consuming troubleshooting approaches. These may involve focusing on in silico gRNA optimization and redesign, tweaking of transfection conditions, or switching strategies for Cas-gRNA delivery or expression. Furthermore, conventional workflows don’t include in vitro assays for Cas-gRNA complex formation or target binding, forcing many researchers to operate and optimize with incomplete information.

In vitro methods

In principle, the issues encountered in the traditional straight-to-cell approach can be addressed using in vitro methods, commonly used in small-molecule and biopharmaceutical development pipelines. Though not often used in CRISPR-Cas editing workflows, in vitro assays can provide indispensable insights into gRNA binding affinity, target or nontarget DNA binding affinity, and cleavage efficiency, bridging the growing knowledge gap between the in silico and cell-based worlds.

Currently, there are several reconstituted CRISPR-Cas systems for assessing in vitro activity. Gel-based cleavage assays help determine the cleavage efficiency of Cas-gRNA ribonucleoproteins (RNPs). However, these assays can be low in throughput and sensitivity, further slowing already cumbersome editing workflows. More rapid, sensitive, and specific assays have been developed. One is Sherlock Biosciences’ Specific High-sensitivity Enzymatic Reporter unLOCKing (SHERLOCK) technology. Another is Tolo Biotech’s one-HOur Low-cost Multipurpose highly Efficient System (HOLMES). SHERLOCK and HOLMES rely on isothermal T7 RNA polymerase-mediated and PCR-based amplification, respectively. Both assays use a fluorescent reporter to detect CRISPR-Cas cleavage.

By providing a better understanding of cleavage activity, such assays would be valuable additions to conventional CRISPR-Cas editing workflows. However, an ideal in vitro assay would also provide insight into other biochemical steps, such as Cas-gRNA complex formation and RNP recognition of target DNA sequences upstream of target cleavage. In addition, fluorescence-based assays, and optical assays in general, rely on amplification, which adds time, reagents, and instrumentation.

Chip-based methods

Recently, the detection of biological activities has gone through a transformation, trading optics-based methods for chip-based electrical methods. One of the chip-based electrical methods is embodied by the Biosignal Processing Unit (BPU), a technology developed by Cardea Bio. It facilitates rapid, highly sensitive translation of biological activity into digital information.

The BPU is a graphene transistor that can be functionalized with a wide range of biomolecules. Performing biochemical reactions (such as a binding interaction or catalysis) near the graphene surface of the BPU alters its electrical characteristics, resulting in real-time electrical signal output. All of this happens without amplification, which is usually required with optical methods. Because measurements are in real time, kinetics data can also be collected, providing robust insights into the molecular dynamics of DNA, RNA, protein, or any other type of biomolecule.

CRISPR-Chip

In 2019, we introduced CRISPR-Chip to the world, a biosensor that combines CRISPR-Cas with the BPU platform to detect target sequences of interest within a genomic DNA context (Figure 1). CRISPR-Chip allows researchers to understand and optimize CRISPR-Cas performance, including gRNA interaction, target recognition, and cleavage.

Figure 1. The CRISPR-Chip contains multiple transistors arranged into three separate channels. One transistor is magnified to show how a Cas protein can be linked to the surface of a graphene transistor.

The graphene surface of the BPU is functionalized with pyrenebutyric acid (PBA), which electrostatically interacts with the graphene and can be covalently coupled via carbodiimide crosslinking to any Cas protein. Unfunctionalized areas of the PBA are blocked by coupling to an inert molecule, amino-polyethylene glycol 5-alcohol, that doesn’t interfere with any downstream biochemical reaction or electrical detection.

Following the blocking reaction, gRNA can be introduced, allowing Cas-gRNA RNP formation near the surface of the graphene chip to be measured. Biological activity that occurs near the surface of the graphene results in a current between the drain and source electrodes—a current that may be distinguished from a baseline signal (Figure 1). Finally, additional reagents, such as PCR-amplified target DNA or reaction components, can be added to measure binding interactions or cleavage reactions.

CRISPR-Complete

CRISPR-Complete is a workflow that uses the CRISPR-Chip to help optimize CRISPR-Cas designs before embarking on expensive and time-consuming cell-based assays (Figure 2). It also helps rank, order, and prioritize Cas-gRNA candidates, thus derisking cell-based editing assays.

Figure 2. CRISPR-Complete can measure the binding between gRNA and a Cas protein, as well as the binding of Cas-gRNA complexes to PCR-amplified target DNA sequences or unamplified genomic DNA. In addition, CRISPR-Complete can measure the cleavage of target amplicons by Cas-gRNA complexes.

The workflow measures binding between candidate gRNAs and Cas proteins of interest, assesses binding interactions between Cas-gRNA complexes and target amplicons, demonstrates cleavage of Cas-gRNA complexes at target amplicons, and confirms binding of Cas-gRNA complexes to unamplified DNA sequences in a genomic context.

The technology offers insight into which gRNAs to select, which Cas protein to use, and which DNA sequence to target, a cluster of options that cannot be deconvoluted using conventional CRISPR-Cas editing workflows or data from cell-based assays. Unlike with other in vitro assays, cleavage can also be correlated with detailed gRNA or target DNA binding data and measured sensitively, without amplification and without the reagents or instruments required for optics-based methods.

Figure 3 shows both raw and processed data generated from a representative CRISPR-Complete workflow. This experiment measured the cleavage of a Cas-gRNA complex at target and nontarget amplicons. As various components of our reconstituted Cas-gRNA system are added stepwise, there is a real-time change in the electrical characteristics of the graphene chip (Figure 3A). This data is calibrated to a baseline and processed to generate an I response, which measures the change in current between the baseline and end state (Figure 3B). Triplicate reactions are run on distinct transistors to ensure the validity of the observed results.
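The sketch below illustrates, with made-up current readings, the kind of baseline-versus-endpoint calculation described here; it is not CRISPR QC's actual processing pipeline, and the numbers are invented for the example:

```python
# Illustrative sketch of a baseline-vs-endpoint response calculation across
# triplicate transistors, using made-up current readings (arbitrary units).
# This is not CRISPR QC's actual data processing pipeline.

import statistics

def i_response(baseline_current: float, endpoint_current: float) -> float:
    """Change in current between the calibrated baseline and the end state."""
    return endpoint_current - baseline_current

# Triplicate (baseline, endpoint) readings for a target and a nontarget amplicon.
target_runs = [(10.0, 6.1), (10.2, 6.4), (9.9, 6.0)]       # cleavage: response drops
nontarget_runs = [(10.1, 9.8), (10.0, 9.9), (10.3, 10.0)]  # little change

for label, runs in (("target", target_runs), ("nontarget", nontarget_runs)):
    responses = [i_response(b, e) for b, e in runs]
    print(f"{label:>9}: mean I response = {statistics.mean(responses):+.2f} "
          f"(sd {statistics.stdev(responses):.2f})")
```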

A reduced I response is synonymous with amplicon cleavage in this experiment. As shown in Figure 3B, the RNP using gRNA #1 cleaves its target amplicon (purple box) and, as expected, does not cleave a nontarget amplicon (orange box).

In the case of gRNA #1, Cas-gRNA complex formation, RNP cleavage at a target amplicon, and discrimination against cleavage at a nontarget amplicon were all validated. If cleavage of the target amplicon had not been seen, the upstream steps of RNP formation and target amplicon binding could have been investigated to further optimize or redesign this Cas-gRNA complex.

The platform has been used to assist small and large biotechnology companies in rapidly identifying high-affinity gRNAs, determining target amplicon binding, and comparing cleavage activity across different Cas9 vendors. As demonstrated above, CRISPR-Complete provides biological insights into cleavage efficiency, the upstream process of Cas-gRNA RNP formation, and target DNA binding. These insights enable gRNA optimization for Cas-gRNA complex formation and confirmation of binding to and cleavage of target amplicon sequences.

Figure 3. CRISPR-Complete can detect CRISPR-Cas function on target and nontarget amplicons. (A) Step 1 is where we monitor Cas characteristics. Step 2 is where we introduce gRNA and the target sequence and monitor their direct interaction. Step 3 is where we begin the cleaving process and monitor that activity. (B) The I response was measured by subtracting the I response at step 33 from the I response at step 29 in panel A and plotted in the blue box. Another reaction, gRNA #1 + nontarget amplicon (red box), was tested in parallel and processed similarly.

 

Kiana Aran, PhD, is the scientific advisor and a board member of CRISPR QC. She is also an associate professor of medical diagnostics and therapeutics at the Keck Graduate Institute.

The post Leveraging Modern Electronics to Streamline CRISPR Workflows appeared first on GEN - Genetic Engineering and Biotechnology News.

Biobank-Tailored LIMS to Track Precious Samples and Manage Data
https://www.genengnews.com/resources/biobank-tailored-lims-to-track-precious-samples-and-manage-data/
Thu, 06 Oct 2022 10:53:56 +0000
Thermo Fisher Scientific describes how to select a LIMS that will apply Laboratory 4.0 principles and “future proof” biobank operations.

By Javier Fraile

Javier Fraile, Digital Science Solutions Manager, Thermo Fisher Scientific

Collecting specimens is an age-old pursuit, one that occupied no less a scientist than Charles Darwin. Back in 1831, he traveled across the world to collect samples of strange and rare species. Today, his unique sample collection is carefully housed in the Natural History Museum in London, England, along with its data and metadata in the form of meticulous logbooks. All this material demonstrates the importance of collecting, documenting, and storing living tissues. In Darwin’s case, these activities formed the basis of our modern knowledge of genetics and biological inheritance. Now these activities occupy biobanks.

Early biobanks were private sample collections built with a specific purpose. Modern biobanks, on the other hand, are large-scale operations that fuel countless biological and medical research studies. Small or large, every biobank holds untapped potential and unknown possibilities. As biobanks have evolved over the years, they have invested in better infrastructure and equipment to protect the valuable samples under their care. However, while focusing on sample upkeep, most biobanks have failed to invest in data management solutions. This has resulted in an overreliance on manual processes that segregate data records and pack them into silos. These tedious practices entail operational inefficiencies, quality issues, and escalating costs that prevent many biobanks from reaching their true potential.

Modern super-biobanks face unique challenges

Unlike the evolution of species described by Darwin, the evolution of the biobank has been fast-paced. The sector has exploded to meet the emerging demands of cutting-edge fields, such as personalized medicine, cell and gene therapy, proteomics, and genomics. This rapid advancement has seen some super-biobanks expand to contain millions of samples.

Each biorepository is unique, from its contents to its organizational structure. And unlike the solely private enterprises of the past, modern biorepositories include government-funded projects, nongovernment organizations, and commercial operations. We are even starting to see the emergence of virtual biobanks that provide an interface to search many interconnected repositories. Despite their unique points, most modern biobanks share common themes. For example, most modern biobanks are dynamic organizations that see samples flow in and out daily.

In addition to sample diversity, the associated data and metadata are also becoming increasingly complex. Data about matters such as informed consent, patient history, and intended use commonly need to be stored along with the samples, influencing how, when, and by whom the samples can be accessed and used. To safely store samples and meet quality and regulatory standards, the chain of custody must be robust, fully understood, and documented. This means comprehensive tracking of a sample’s end-to-end journey, from submission to storage and eventual shipping.

Biobanks face these challenges in their day-to-day operations, but they are not solely built for the research of today. Stored samples will likely be used in ways that we can’t anticipate; that is, they will be used in future research and evaluated with techniques that have yet to be developed. This means that biobanks must be future-proofed. Specifically, they must have the technology in place to create automated and robust systems that will be best placed to integrate with the technology of tomorrow.

From Laboratory 4.0 to Biobank 4.0

Many biobanks have already implemented the infrastructure needed to accurately submit, sort, store, and ship samples, as and when required. But while this physical investment has been made, investment in data management systems to support this infrastructure has been lacking.

Cutting-edge equipment used in biobanks, such as smart cryogenic storage systems, automated capping and decapping systems, and high-speed barcode scanners, is often designed to comply with Laboratory 4.0 principles, which call for next-generation laboratories to combine digitalization with automation. Laboratory 4.0-standard equipment promises fewer manual tasks and offers cost- and time-effective operations.

However, investing in Laboratory 4.0-standard equipment without adopting smart connectivity, automation, and machine learning capabilities restricts the return on investment. Put simply, biobanks that embrace Laboratory 4.0 principles, along with digitalized data management, will start utilizing the full capabilities of their technology today and will be positioned to leverage big data in the future.

Lifting limitations through LIMS

Laboratory information management systems (LIMS) have revolutionized laboratories, providing a centralized, single source of truth for all data associated with samples, equipment, workflows, and instruments. LIMS break the data silos of the past to increase efficiencies and enable automated workflows. Through investment in these systems, many laboratories experience time savings, reduced costs, and improved quality standards.

The story is very different in the biobanking industry. Many biobanks have continued to rely on manual processes to submit, track, and monitor samples. Despite the clear benefits offered by LIMS, uptake by biobanks has been slow, and understandably so: LIMS typically haven’t accommodated the specific capabilities required by this sector.

Standard LIMS require considerable investment, and when they don’t quite fit biobanking requirements, securing procurement funds and justifying running costs can become challenging. With standard LIMS, biobank teams often require extensive training, and when large elements of a LIMS go unused, the return on investment remains unrealized.

How to choose the best LIMS for your biobank

Responding to the growing demands of the fast-evolving biobanking industry, developers are now releasing biobank-specific LIMS. However, the available systems vary widely in their capability, usability, and future-proofing potential. When a biobank-specific LIMS is being chosen, several elements should be considered. The most important ones are discussed in the following subsections.

Automation: The foundation of Laboratory 4.0 is automation. Once it is in place, it supports the elimination of tedious, error-prone manual tasks. A LIMS should eliminate or automate as many manual tasks as possible so that sample entry, processing, retrieval, and shipping may proceed more quickly and reliably.

At a minimum, the LIMS should automatically generate unique codes and print barcodes, ensuring that samples are tracked throughout their journey into, through, and out of the biobank. This end-to-end tracking using automation provides biobanks with a digital chain of custody. By applying the unique codes, the LIMS records who added or removed a sample, as well as the action’s time and the location. This tracking can then extend into shipping and continue until the sample reaches its destination.
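As an illustration of what such an automated audit trail might capture, the following sketch models a barcoded sample and its chain-of-custody events. The field names and barcode scheme are invented for the example and do not reflect any particular LIMS:

```python
# Illustrative sketch of the chain-of-custody events a LIMS might record
# automatically for a barcoded sample. Field names and the barcode format
# are hypothetical, not taken from any specific LIMS.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class CustodyEvent:
    barcode: str
    action: str        # e.g., "received", "moved to freezer", "shipped"
    operator: str
    location: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Sample:
    barcode: str
    events: List[CustodyEvent] = field(default_factory=list)

    def log(self, action: str, operator: str, location: str) -> None:
        """Append a timestamped event recording who did what, where."""
        self.events.append(CustodyEvent(self.barcode, action, operator, location))

sample = Sample(barcode="BB-2022-000431")
sample.log("received", "j.fraile", "intake bench 2")
sample.log("moved to freezer", "j.fraile", "cryostore rack A, shelf 3")
for ev in sample.events:
    print(ev.timestamp.isoformat(), ev.barcode, ev.action, ev.operator, ev.location)
```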

Integration with existing workflows: Point solutions developed for highly specialized functions in the biobank are sometimes incompatible with existing technology or software and cannot be integrated with the workflow. With connectivity at their core, LIMS tailored for biobanks ensure integration without compromising on specific functionality.

Not only can LIMS integrate with other data systems, such as hospital information management systems or electronic laboratory notebooks, but they also interface seamlessly with smart instruments, which typically include storage and monitoring devices. As a result, samples can be closely monitored, and incidents such as power outages or equipment breakdowns can be traced to the individual samples involved, thereby increasing quality standards.

Simplified, easy-to-use interface: It takes less time and money to train personnel on easy-to-use systems. When operators understand a system and see its value, new workflows are adopted faster.

Accordingly, biobanks should be choosing LIMS with simple user interfaces that include only relevant information. Dashboards should be configurable to provide an overview of the entire biobank repository. Having a real-time, top-level snapshot as well as the option to drill into details can help managers resolve sample capacity issues. Specifically, these capabilities help managers spot available space and review storage conditions.

Driving high quality and regulatory compliance: Tailored, fit-for-purpose LIMS help biobanks reach optimal quality standards. Parameters that directly influence sample viability, such as freeze-thaw cycles and time spent out of cold storage, can be tracked throughout the chain of custody and used to trigger customizable automatic alerts. Having records of these critical quality parameters ensures that samples and sample handling practices comply with regulatory requirements. From the outset, LIMS should demonstrate compliance with good practice (GxP) guidelines, and as regulatory standards change, LIMS should adapt to meet them.
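As a simple illustration of how such alerts might work, the sketch below checks two viability-related parameters against thresholds; the limits are arbitrary examples, since real limits depend on the sample type and the biobank's own procedures:

```python
# Illustrative sketch of a quality alert on parameters that affect sample
# viability. The thresholds are arbitrary examples; real limits depend on
# the sample type and the biobank's own SOPs.

def quality_alerts(freeze_thaw_cycles: int, minutes_out_of_storage: float,
                   max_cycles: int = 3, max_minutes: float = 30.0) -> list:
    alerts = []
    if freeze_thaw_cycles > max_cycles:
        alerts.append(f"Freeze-thaw cycles ({freeze_thaw_cycles}) exceed limit of {max_cycles}")
    if minutes_out_of_storage > max_minutes:
        alerts.append(f"Time out of cold storage ({minutes_out_of_storage} min) "
                      f"exceeds limit of {max_minutes} min")
    return alerts

for alert in quality_alerts(freeze_thaw_cycles=4, minutes_out_of_storage=12.0):
    print("ALERT:", alert)
```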

Developed for the real world with an eye to the future: Before investing in LIMS, biobanks should ask to view a real-world demonstration, whether it is at another biobank or an integrated test site. LIMS providers should also have a pipeline of planned upgrades and future releases to respond to the changing regulatory landscape. Forward-thinking LIMS developers will have a detailed understanding of the industry and its challenges. Accordingly, they will have the ability to respond promptly as biobanking needs evolve.

Using LIMS to drive Biobank 4.0

With future medical research and biotherapeutic development relying on the samples contained within biobanks today, there is little room for errors and inefficiencies caused by manual tasks and siloed data management. Despite rapid growth, biobanks have yet to embrace data practices that maximize efficiencies and fully safeguard sample quality.

Biobank-tailored LIMS incorporate all the benefits associated with Laboratory 4.0 principles. By applying these principles, LIMS help create the next generation of biobanks through (1) digitalization (establishing a central repository for all data and permitting access to data-related insights); (2) automation (allowing scientists to concentrate on value-added activities instead of tedious manual tasks); and (3) connectivity (bringing together all data, equipment, infrastructure, samples, and people in one system).

By choosing easy-to-use and future-proofed LIMS that give an overview of a sample’s entire journey, biobanks will ensure that samples will remain protected and reach researchers quickly and easily. The improved cost and time efficiencies delivered through LIMS will enable biobanks to continue to grow, further expanding the usefulness of the highly valuable information they contain.

Javier Fraile is digital science solutions manager at Thermo Fisher Scientific.

The post Biobank-Tailored LIMS to Track Precious Samples and Manage Data appeared first on GEN - Genetic Engineering and Biotechnology News.

Synthetic Biology Platform Unleashes the Power of Plants
https://www.genengnews.com/resources/synthetic-biology-platform-unleashes-the-power-of-plants/
Thu, 06 Oct 2022 10:53:13 +0000
Calyxt has developed technology for engineering plant metabolism and establishing plant-based chemistries for use in new materials and products.

By Travis Frey, PhD, and Bobby Williams, PhD

Plants are powerful. They have shaped the environments and atmosphere in which all organisms grow and evolve. From the earliest days of multicellular life, plants have synthesized the base nutrients for nearly all existence. They led the move from marine to terrestrial life, and they created and sustain the chemical ecology in which all life on Earth is embedded.

Besides providing the building blocks necessary for other organisms to develop and evolve, plants evolve as well. Indeed, plants have adapted to occupy almost all inhabitable environments. About 400,000 plant species are known, and more are being discovered every year.1

Travis Frey, PhD, chief technology officer, Calyxt

To survive and thrive in diverse and often challenging environmental niches, plants have evolved a monumental array of chemistries. For example, there are chemistries that help plants coordinate their internal processes, as well as chemistries that help plants communicate with their neighbors. The chemical universe of plants is estimated to be in the millions of unique metabolites,2 which dwarfs the chemical spaces that have been established by animals and microbes. Furthermore, many compounds are unique to specific lineages of plants. In short, tapping the diverse power of plants means accessing the diversity of the whole plant kingdom.

Yet over time, as humans domesticated plants to meet more of our needs, we have focused our use of plants on a limited number of high-producing species. For example, about 40% of human-consumed calories come from just three plant species, every one of which comes from the same family of grasses.3 While this success was necessary for feeding an ever-growing human population on limited land resources, and it continues to be critical for society today, one cannot help but wonder what more plants could offer if we were able to grow all the world’s various species and harness the millions of natural compounds these naturally rich chemical engineers can produce.

Until just recently, this question was nearly impossible to answer because of limited land and seemingly unlimited plant species. Many of the rich chemistries that we have come to use in our daily lives were originally sourced from plants. For example, we have long used saponins in natural soaps. (More recently, we started developing saponin-based adjuvants to enhance immune responses to vaccines.) Salicin from willow bark once served as a pain medication. (Eventually, medicinal salicin was replaced by a related compound known as acetylsalicylic acid, or aspirin.) And an extract from the bark of the Pacific yew was used by Native Americans to treat various ailments. (Today, it is a natural source of Taxol, a powerful anticancer drug.)

Bobby Williams, PhD, product engineering director at Calyxt

Plants are expected to contain many other useful compounds. However, these compounds can be hard to identify. Even after these compounds have been found, they can remain underutilized due to paucity of pathway knowledge, limited natural sourcing, lack of suitable plant production, or the inability to chemically synthesize complex compounds. These challenges are being tackled by Calyxt, a “plant-based synthetic biology company” headquartered in Roseville, MN.

An evolving plant company

Founded in 2010, Calyxt began its synthetic biology journey by engineering the metabolic activity of plants through gene editing. This work reflected what was and remains the company’s driving mission: developing sustainable plant-based solutions that improve health and reduce waste. One of Calyxt’s earliest achievements was the development of a gene-edited soybean plant that produces an oil higher in oleic acid and lower in saturated fatty acids than commodity soybean oils. Calyxt’s soybean oil is also notable for having zero grams of trans fat per serving. Finally, it has up to three times the fry life and a longer shelf life than commodity oils, features that enhance sustainability.

In subsequent years, Calyxt has built a powerful engine for metabolic design in plants called the PlantSpring™ platform.

Intent on discovering and verifying product concepts and on bringing plant-derived products to market more quickly, Calyxt developed its Plant Cell Matrix™ (PCM™) technology. This technology, which augments the PlantSpring platform, leverages multiple cell types and sustains a transformable and rapidly growing plant organ culture. The culture incorporates structures commonly known as hairy roots and is capable of prototyping metabolic perturbations (Figure 1).

Figure 1. Calyxt’s proprietary Plant Cell Matrix™ (PCM™) technology is based on multicellular, autonomously growing, hairy roots of hemp in culture. PCM systems are suitable for engineering, propagating, and selecting biomolecules for production.

Given their transformation efficiency and productivity, PCM systems are amenable to the regulation of gene expression by various means, including transgene approaches, gene silencing, gene activation, and genome editing. Accordingly, PCM systems allow Calyxt to design, engineer, and discover plant pathways in months—work that ordinarily requires years of whole crop field testing. Calyxt has found that PCM systems are useful not only for testing genetic hypotheses, but also for producing many of the plant metabolites that are desirable for industries such as cosmetics, nutraceuticals, and pharmaceuticals.

Tapping plant potential

Plants are inherently related through the evolution of their DNA from common ancestors. This relatedness means that the chemistries encoded by that DNA also share common precursors. As a result, if you understand the metabolic pathways in one plant and the DNA underlying those pathways, you can harness that knowledge to produce the same chemistries in a different plant, starting from those common precursors.

Of the approximately 400,000 known plant species, only about 800 have had their genomes sequenced.4 Few of the genes that direct the synthesis of plant metabolites are known. Nonetheless, the known plant genome universe is expanding thanks to basic research. At the same time, Calyxt is conducting strategic genome sequencing to elucidate high-value pathways.

The PlantSpring platform combines the tools of public and proprietary genome resources with artificial intelligence/machine learning (AI/ML) to build predictive models that illuminate pathway knowledge across the plant kingdom. Calyxt is addressing one of the known limitations of accessing the diversity of plant-produced biomolecules by leveraging the PlantSpring platform and developing an ecosystem that can make predictions to complete uncharacterized biosynthetic pathways.

The key to eliminating the difficulties of sourcing rare plant material is advancing PCM systems for the rapid prototyping of metabolic engineering designs. Growth rates of PCM systems vary widely across plant species; notably, hemp produces among the fastest-growing hairy roots measured to date (Figure 2).5–10

Figure 2. Calyxt hemp-derived PCM™ systems (left side of graph) are among the fastest growing hairy roots, as represented by the fewest days to double in size (blue: 1.4 days) and the fewest days to achieve a 20-fold increase in size (orange: 14 days).5–10

Beyond biomass accumulation, PCM systems also open the possibility of reproducing biosynthetic pathways from uncultivated and/or endangered plants. Hemp, for example, produces a rich set of precursor chemistries11 that can be modified to produce complex and species-specific secondary metabolites. This enables decoupling of the chemical diversity of plants from the geographical diversity of their ecological niches. In short, compounds from exotic plants could be produced even in locations where the plants are not native.

A final limitation to tapping the potential of plant-produced chemistry is the lack of suitable production systems at scale. Currently, most plant-based biomass production scalability comes as a secondary stream from field crops. Accessing biomolecules is often opportunistic based on the endogenous chemistry of the limited species in production agriculture. PCM design and engineering can enable bespoke biomolecule production, and PCM growth rates augur well for bioproduction scaleup.

Calyxt was able not only to engineer the DNA of the PCM, but also to create a system capable of growing PCM cultures in a bioreactor. Since bioreactors are enclosed, controlled environments, they can be placed anywhere, they can grow plants year-round, and they can be built vertically to optimize the growing potential of a limited land area. PCM systems that produce plant-derived compounds can be orders of magnitude more productive per unit land area than field- or aeroponics-sourced material (Figure 3). Growing plants in these types of systems is not trivial, but Calyxt believes it has discovered a path to do this with the BioFactory™ production system.

Figure 3. Estimates of root biomass production per unit area across several approaches from field-grown plants to PCM™ systems in the Calyxt BioFactory™.

Nature and value of plant chemistry

Plant chemistry systems, especially those responsible for secondary metabolism, are often modular. There are major families of chemistry derived from common precursors, but the individual compounds often reflect the specificity of the species and their environments.

The compounds in the monoterpene, sesquiterpene, and triterpene families are all derived from isoprene subunits. Compounds from any of the families can be modified to produce linear or cyclic skeletons that can be decorated with different functional groups to produce thousands of different compounds. For example, both sterols and saponins are subfamilies within triterpenes that have value in pharmaceutical/medicinal, cosmetics, and industrial applications. However, many triterpenes are either difficult to source or are produced by chemical synthesis using petroleum-based compounds.
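As a rough guide to the nomenclature, the parent skeletons of these families scale with the number of C5H8 isoprene units: two units for a monoterpene, three for a sesquiterpene, and six for a triterpene. The trivial bookkeeping below is illustrative only; real metabolites are subsequently cyclized and decorated with functional groups, so their final formulas differ:

```python
# Illustrative bookkeeping: terpene classes as multiples of the C5H8 isoprene
# unit. Real plant metabolites are subsequently cyclized and decorated, so
# their final formulas usually differ from these parent-skeleton values.

ISOPRENE = {"C": 5, "H": 8}

TERPENE_CLASSES = {
    "monoterpene": 2,    # 2 isoprene units -> C10H16
    "sesquiterpene": 3,  # 3 isoprene units -> C15H24
    "triterpene": 6,     # 6 isoprene units -> C30H48
}

for name, units in TERPENE_CLASSES.items():
    carbons = ISOPRENE["C"] * units
    hydrogens = ISOPRENE["H"] * units
    print(f"{name:>13}: {units} isoprene units -> C{carbons}H{hydrogens} parent skeleton")
```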

By combining a knowledge of chemistry with deep pathway knowledge and engineered PCM systems, Calyxt is working to evolve society’s relationship with plants. Calyxt intends to access previously inaccessible plant chemistries, unlocking the richness therein and delivering countless boons to humanity.

Travis Frey, PhD, serves as chief technology officer and Bobby Williams, PhD, is product engineering director at Calyxt.

References

1. Christenhusz and Byng. The number of known plants species in the world and its annual increase. Phytotaxa 2016; 261(3): 201–217.
2. Fang et al. Exploring the Diversity of Plant Metabolism. Trends Plant Sci. 2019; 24(1): 83–98.
3. Food and Agriculture Organization of the United Nations. Once neglected, these traditional crops are our new rising stars. Published: October 2, 2018. Accessed: September 12, 2022. https://www.fao.org/fao-stories/article/en/c/1154584/.
4. Sun et al. Twenty years of plant genome sequencing: achievements and challenges. Trends Plant Sci. 2022; 27(4): 391–401.
5. Barba-Espin et al. Ethephon-induced changes in antioxidants and phenolic compounds in anthocyanin-producing black carrot hairy root cultures. J. Exp. Bot. 2020; 71(22): 7030–7045.
6. Carlin et al. Effects of different culture media and conditions on biomass production of hairy root cultures in six Mexican cactus species. In Vitro Cell. Dev. Biol.-Plant 2015; 51(3): 332–339.
7. Häkkinen et al. Molecular farming in tobacco hairy roots by triggering the secretion of a pharmaceutical antibody. Biotechnol. Bioeng. 2014; 111(2): 336–346.
8. Parkin-Parizi et al. Impact of different culture media on hairy roots growth of Valeriana officinalis L. Acta Agric. Slov. 2014; 103(2): 299–305.
9. Thimmaraju. Bioreactor for cultivation of red beet hairy roots and in situ recovery of primary and secondary metabolites. Eng. Life Sci. 2009; 9(3): 227–238.
10. Urbanska et al. The growth and saponin production of Platycodon grandiflorum (Jacq.) A. DC. (Chinese bellflower) hairy roots cultures maintained in shake flasks and mist bioreactor. Acta Soc. Bot. Pol. 2014; 83(3): 229–237.
11. Jin et al. Secondary Metabolites Profiled in Cannabis Inflorescences, Leaves, Stem Barks, and Roots for Medicinal Purposes. Sci. Rep. 2020; 10: 3309.

The post Synthetic Biology Platform Unleashes the Power of Plants appeared first on GEN - Genetic Engineering and Biotechnology News.

An Integrative Approach to Probing Transient Protein Structures in Cell Extracts
https://www.genengnews.com/resources/an-integrative-approach-to-probing-transient-protein-structures-in-cell-extracts/
Tue, 06 Sep 2022 10:51:18 +0000
By combining mass spectrometry, electron microscopy, and other techniques, researchers resolve the quaternary structure of the pyruvate dehydrogenase complex.

By Panagiotis L. Kastritis, PhD

To function correctly, cellular proteins depend not only on their complex tertiary and quaternary structures, but also on their proximity to and interactions with other molecules participating in the same metabolic or enzymatic pathway. Characterizing such topography and interactions is vital to a full understanding of cell function and dysfunction.

But visualizing and characterizing native protein-protein interactions remains a challenge, not least because of the complexity and size of protein structures within the cell. To date, most insights into protein function have come from analytical techniques that destroy higher order protein structure and disrupt the spatial arrangements that are vital for effective function.

Fortunately, a new era for cell and structural biology has dawned. Novel sample preparation techniques coupled with the advent of cryogenic electron microscopy (cryo-EM) and advances in complementary techniques, such as mass spectrometry (MS), mean it is now possible to visualize and interrogate large protein complexes in a near-native state. Already, technological and computational advances have produced an explosion of data on protein structure and function.

New integrative methods are enabling a holistic approach to structural biology. Consequently, transient scaffolds and subunits that used to be undetectable can now be revealed, improving our knowledge of the cell’s ultrastructure and offering new avenues for drug discovery.

A new perspective on protein architecture

Cell architecture and protein stoichiometry are vital for correct cell functioning. Yet the mechanical and chemical methods commonly used to isolate cellular proteins disrupt a myriad of finely controlled interactions between protein subunits and complexes. Research to explore catalytic pathways is often reductionist, identifying individual constituents before adding purified components together, mostly after heterologous expression, to test and monitor interactions.

A novel, more holistic approach provides a new perspective. It uses cell extracts to keep protein complexes intact and in natural proximity to other proteins, allowing scientists to study higher order protein architecture and organization. The extracts can be investigated using a range of cutting-edge visualization, analytical, and functional assays, many of which allow proteins to be studied in a native state.

Data generated using innovative techniques, such as cryo-EM to visualize proteins and crosslinking MS (XL-MS) to identify and characterize them, is complemented by information from functional assays and advanced computational biology to reveal previously unseen details of intricate protein-protein interactions within the cell. This integrated approach, further fueled by technological and digital automation, is pushing forward the frontiers of cell biology and classical structural biology.

Novel structural and nonstructural elements revealed

The benefits of this integrative structural biology approach can be illustrated by the pyruvate dehydrogenase complex (PDHc), which is a giant, 10-megadalton enzymatic assembly, ubiquitous in eukaryotic cells, that converts pyruvate to acetyl coenzyme A. Multiple PDHc components have been characterized in isolation and localized to the mitochondrial inner membrane–matrix interface, but the complex’s quaternary structure has remained elusive due to its sheer size, heterogeneity, and plasticity. Paradoxically, previous studies have identified structural components overlaying the pyruvate binding site, raising the question of how the substrate enters and effectively binds at the site.

Recently, investigators have taken a more holistic approach to better understand the structural organization of PDHc (Figure 1). They enriched cell extracts from a thermophilic fungus and employed a range of analytical assays, including MS and cryo-EM, to reveal, for the first time, the active structure of the ubiquitous PDHc.1

Figure 1. Diverse biochemical and biophysical methods were combined to probe a native cell extract of the thermophilic fungus Chaetomium thermophilium and investigate the structural organization of a-keto acid dehydrogenase complexes. This integrative structural biology approach enabled researchers to elucidate the spatial relationships between the protein components in the dynamic assembly that is the pyruvate dehydrogenase complex (PDHc). They also, for the first time, identified enzyme clusters that form a transient catalytic nanocompartment that is known as the “pyruvate dehydrogenase factory.” (Cell Rep. 2021; 34: 108727.)

Large protein complexes, isolated from cell lysates using size-exclusion chromatography (SEC), were visualized at the atomic level using cryo-EM and their composition determined using MS techniques. Chemical XL-MS, which uses chemicals to stabilize protein-protein interactions ahead of MS analysis, helped pinpoint interactions in the protein assemblies.

Linking the resulting MS data to molecular signatures identified in cryo-EM micrographs has revealed a multitude of structures, including the fatty acid synthase metabolon, double- and single-membrane structures, liposomes with encapsulated biomolecules, and other higher order complexes.2

This integrated approach has provided new insights into the three-dimensional structure of the PDHc binding site and proved for the first time that it is accessible to the pyruvate substrate. This approach also unveiled an asymmetric protein configuration directly involved in pyruvate oxidation.1

The big reveal came when a giant nanocompartment was visualized for the first time—a transient catalytic chamber called the “pyruvate dehydrogenase factory.” It is likely to be the first of many similar structures identified using these methods (Figure 2).

Figure 2. Asymmetric reconstruction of the full pyruvate dehydrogenase complex (PDHc) from native cell extracts. Left panel: Previous studies showed that PDHc has a component that is stably attached to the E2p core, covering the binding site of the substrate. Right panel: Application of the integrative methodology discussed in this article has provided a detailed model of the PDH factory nanocompartment. (Cell Rep. 2021; 34: 108727.)

Changing the interface for cell and structural biology

Technologies, such as MS and cryo-EM, continue to advance at pace, together with functional and biophysical assays. Added to that, automation and machine learning tools have increased throughput and efficiency and have led to an explosion in datasets that allow investigators to take a much broader view of intricate protein interactions. Generating large datasets makes it possible to mine data distributions rather than single data points. Moreover, large datasets provide investigators with new opportunities to interrogate cellular processes and identify points for intervention.

Additionally, machine learning–inspired algorithms are enhancing image processing workflows to accelerate analyses of thousands of protein interactions.3 This high-resolution, high-throughput approach is revealing distinct structural signatures. These signatures can be correlated with proteomic data and cryo-EM maps to characterize previously unidentified protein communities at high resolution.

It is this integrated approach that is providing new insights into molecular organization at near atomic scale and revealing novel protein-protein interactions that are vital for correct cell function.4 Undoubtedly, these insights will take drug discovery beyond an enzyme’s active site to other specific points of protein interaction.

New horizons for drug discovery

There is still a great deal to learn about the cell’s proteome and its exact architecture. Bringing together a combination of biological and biophysical assays into an integrative model, using cell extracts and purified molecules, provides enormous potential for solving the most challenging questions of cell structure and function.

The integrative approach offers a bridge to combine data on purified proteins and subunits with visual and analytical data on larger complexes in their native state, at an unprecedented resolution.

For the first time, transient protein structures that are vital for enzyme activity can be visualized and investigated. And automation and machine learning are transforming the scale and rate at which new insights can be acquired, as witnessed recently with the AlphaFold and RoseTTAFold projects for the prediction of three-dimensional structures of proteins and their interactions.

All this is providing inspiration for new avenues of enquiry. But to push the technologies to the limits, investigators must work together and develop a coherent model that can be applied across laboratories and used by investigators in different disciplines.

Within a decade, the pieces of the cellular puzzle should be in place. A full inventory of cellular protein structures is in sight, and integrated data on how the cell functions and is organized is on the horizon. Critically, the wide implementation of an integrated analytical approach, using state-of-the-art and emerging technologies, is necessary to underpin this explosion in knowledge and accelerate and expand the scope of drug discovery in years to come.

References
1. Kyrilis FL, Semchonok DA, Skalidis I, et al. Integrative structure of a 10-megadalton eukaryotic pyruvate dehydrogenase complex from native cell extracts. Cell Rep. 2021; 34: 108727. DOI: 10.1016/j.celrep.2021.108727.
2. Skalidis I, Tüting C, Kastritis PL. Unstructured regions of large enzymatic complexes control the availability of metabolites with signaling functions. Cell Commun. Signal. 2020; 18(1): 136. DOI: 10.1186/s12964-020-00631-9.
3. Kyrilis FL, Belapure J, Kastritis PL. Detecting Protein Communities in Native Cell Extracts by Machine Learning: A Structural Biologist’s Perspective. Front. Mol. Biosci. 2021; 8: 660542. DOI: 10.3389/fmolb.2021.660542.
4. Tüting C, Kyrilis FL, Müller J, et al. Cryo-EM snapshots of a native lysate provide structural insights into a metabolon-embedded transacetylase reaction. Nat. Commun. 2021; 12: 6933. DOI: 10.1038/s41467-021-27287-4.

Panagiotis L. Kastritis, PhD (panagiotis.kastritis@bct.uni-halle.de), is junior professor of cryo-electron microscopy and computational structural biology at the Interdisciplinary Research Center HALOmem, Charles Tanford Protein Center Institute of Biochemistry and Biotechnology, Martin Luther University Halle-Wittenberg, Germany.

The post An Integrative Approach to Probing Transient Protein Structures in Cell Extracts appeared first on GEN - Genetic Engineering and Biotechnology News.
