Build, Buy or Both

Over the years, we’ve been involved in a number of LIMS/ELN selection projects. In this new series of blog posts, we’ll take a look at some of the lessons learned from those projects.


There are a variety of circumstances that can drive the decision to build or buy a system. Here are a few questions that you can use to help you better understand where you might fit on the spectrum.

Build

  • Do we have the internal resources, processes, experience and technology to build the application ourselves?
  • Can we outsource this effort? Can we do this with the proper oversight necessary to see this to its conclusion?
  • What is the opportunity cost of dedicating internal resources to this effort? If our informatics department is tied up with software engineering tasks, will it be unable to address the higher-value needs of our scientific groups?
  • What are the ongoing costs going to be to maintain this system?
  • Could we open source the parts of the system that aren’t proprietary? Would that alleviate some of the maintenance burden on the staff? Is there a community of practice that would benefit and that could share some of the costs? Organizations like the Pistoia Alliance, which focuses on implementing pre-competitive technologies and standards, might offer a way forward.

Buy

  • Does the solution address most of our business processes? If not, can multiple solutions be integrated into a seamless whole?
    • What is the cost of integration?
    • What will the maintenance cost be for each of these integrations? Each time a vendor updates an application, the integrations between that application and the others have to be re-tested (or revalidated, in the case of validated systems). Thus, the fewer the vendors you select, the lower the costs.
    • Does the vendor provide validation services?
  • What is the true cost of ownership? (A worked sketch of this arithmetic follows the list.)
    • Yearly licensing fees. Is the product licensed per module? Per module, per seat? Will you end up paying for extra seats in a module that your organization never uses?
    • Implementation/configuration costs
    • Integration costs
    • Support costs (both internal and external)
    • On-Prem vs Cloud-based Hosting costs. Does the vendor provide a discount for cloud-based hosting?
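
To compare vendors on an equal footing, it helps to roll those line items into a single multi-year figure. Here’s a minimal sketch of that arithmetic in TypeScript; the cost categories mirror the list above, and every number shown is a placeholder to be replaced with figures from your own vendor quotes.

```typescript
// Minimal total-cost-of-ownership sketch. All figures are placeholders;
// substitute the numbers from your own vendor quotes.
interface VendorCosts {
  yearlyLicense: number;   // per-seat or per-module fees, annualized
  implementation: number;  // one-time implementation/configuration cost
  integration: number;     // one-time cost to build integrations
  yearlySupport: number;   // internal + external support, per year
  yearlyHosting: number;   // on-prem infrastructure or cloud hosting, per year
}

function totalCostOfOwnership(c: VendorCosts, years: number): number {
  const oneTime = c.implementation + c.integration;
  const recurring = (c.yearlyLicense + c.yearlySupport + c.yearlyHosting) * years;
  return oneTime + recurring;
}

// Example: a hypothetical vendor over a 5-year horizon.
const vendorA: VendorCosts = { yearlyLicense: 50_000, implementation: 80_000,
  integration: 30_000, yearlySupport: 20_000, yearlyHosting: 15_000 };
console.log(`Vendor A, 5 years: $${totalCostOfOwnership(vendorA, 5).toLocaleString()}`);
```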

Both

In addition to the previous options, you may find it necessary to buy multiple solutions and integrate them into a whole. You may also have internally developed applications that require integration.

  • Do you have experienced programming staff to support this?
  • Do the systems you selected have publicly available APIs (Application Programming Interfaces) that you can write to? (see Frequently Asked Questions for more details).
  • How will this affect our upgrade costs if each integration has to be re-tested whenever the vendor rolls out an upgrade?

Need help getting started with your ELN or LIMS project? Contact us


The Three-Legged Stool

Over the years, we’ve been involved in a number of LIMS/ELN selection projects. In this new series of blog posts, we’ll take a look at some of the lessons learned from those projects.

The Three-Legged Stool

It’s a commonly held misconception that ELN/LIMS implementations are a wholly technological solution to a scientific problem. In reality, every successful informatics implementation is a three-legged stool consisting of people, process and technology. Each leg plays an essential role in the project’s success, and removing any one of them can have disastrous consequences.

People
Regardless of the size of the organisation, most life science companies have some form of governance structure in place to ensure that strategic decision makers can see what’s happening within the company, identify challenges and allocate resources, and set direction.

When selecting an ELN/LIMS system, the governance structures within the company help set priorities, drive the agenda for the effort, and ensure that key decision makers and internal subject matter experts help advance the selection process. Buy-in at all levels of the company is essential, and alignment with business goals and priorities ensures that the teams use the same filters when reviewing vendors and their solutions.

At a minimum, you’ll want the following types of people involved:

Senior Leadership – these leaders will form the steering committee. Their responsibility is to identify and recruit key opinion leaders in the organization, set the tone, vision and agenda for the effort, identify strategic priorities, and set budgets.

Key Opinion Leaders/Subject Matter Experts – Their role is to help explain your scientific processes, the tools you use, the data you collect, and the analyses you perform. This information will drive the requirements gathering process. They’ll also help evaluate each of the solutions. Look for people who are typically early adopters of new technologies, who have hands-on experience with lab processes and whose opinions hold sway within the organization.

Research Informatics/IT – Their role will be to evaluate the solution on its technical merits, determine how the solution fits with the existing infrastructure, and to assess the associated costs for each solution.

Process
At a certain level, all drug discovery, diagnostic and medical device companies follow the same industry-standard, stage-gated approaches. But the devil is in the details, and the key to a successful systems implementation is a common understanding of the process your company follows to get a product out the door. It’s this process that you’re looking to support with your new system. You’ll want to understand the answers to the following questions:

  • What are the steps in the process?
  • Which steps in the process generate data?
  • What instruments and software are involved?
  • What types of data files are generated? (Excel, PDF, XML, JSON, proprietary file types?)
  • How big are the data files?
  • Are the instrument controllers networked together? Do they automatically deposit files on a shared drive? How frequently is the drive backed up?
  • Who is responsible for that data? For generating it, for verifying it, for recording it?
  • When something goes awry, how do you track down the source of the problem? By and large, variations in experimental data tend to come from two sources: operator variability and material variability. Are you collecting the information necessary to track down the source of experimental variation?

One consequence of a rapidly moving, science-driven company is that the velocity of change often leaves scientific groups operating in relative isolation, unaware of the processes upstream and downstream of them and of their impact on each other.
This can result in handoffs between organizations that require additional work. For example, you might outsource part of your process to a CRO, and the data you get back must be “munged” into the right format for the software application you use to analyze the results. That data munging can be time-consuming and error-prone. Wherever possible, we want to eliminate these problems, both by surfacing them to create organizational awareness and by looking to vendors for possible solutions.
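
To make the munging problem concrete, here’s a small, hypothetical sketch: a CRO returns assay results as a CSV whose column names and units don’t match what the downstream analysis tool expects, so a script has to reshape it. The column names and the µM-to-nM conversion are invented for illustration.

```typescript
// Hypothetical example: reshape a CRO's CSV export into the column names
// and units our downstream analysis tool expects.
// CRO columns: "Compound ID", "IC50 (uM)"  ->  expected: "compound_id", "ic50_nM"

function mungeCroCsv(csvText: string): string {
  const [header, ...rows] = csvText.trim().split("\n");
  const cols = header.split(",").map(c => c.trim());
  const idIdx = cols.indexOf("Compound ID");
  const ic50Idx = cols.indexOf("IC50 (uM)");
  const outRows = rows.map(row => {
    const fields = row.split(",");
    const ic50nM = parseFloat(fields[ic50Idx]) * 1000; // µM -> nM
    return `${fields[idIdx].trim()},${ic50nM}`;
  });
  return ["compound_id,ic50_nM", ...outRows].join("\n");
}

console.log(mungeCroCsv("Compound ID,IC50 (uM)\nCMPD-001,0.25\nCMPD-002,1.5"));
// -> compound_id,ic50_nM
//    CMPD-001,250
//    CMPD-002,1500
```

Every one of these scripts is a maintenance liability; surfacing them during process review tells you where an integrated solution would pay for itself.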

Technology
The technologies that drive your business are constantly changing. And having a broad view of the players in the landscape can make all the difference in your selection process.

Here are some of the more recent innovations that we’re seeing:

Cloud-based Solutions
Although vendors have always had solutions that could be deployed in your own data center, over the past seven years we’ve seen a steady movement toward the cloud. This makes it easier and less expensive for startups to manage their data and operations without large capital expenditures on infrastructure. It also means that geographically dispersed organizations can work together more effectively. And with an increasing reliance on partnerships, cloud-based solutions can make it possible for partners to share data securely.

Machine Learning
Machine Learning and Artificial Intelligence have been hot buzzwords over the past few years, but until recently they were a solution in search of a problem. Lately that’s changed, with the emergence of machine learning applied to screening data. It allows your scientists to discover new trends in very diverse data sets: you can examine screening data for millions of molecules and uncover hidden correlations between structural or sequence features and changes in activity, in both QSAR (Quantitative Structure-Activity Relationship) and BSAR (Biological Sequence-Activity Relationship) data.
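
At its core, “discovering a correlation between a structural feature and activity” starts with a statistical calculation over your screening records. The sketch below computes a Pearson correlation between a single, hypothetical molecular descriptor and measured activity; real QSAR work uses many descriptors and proper model validation, but the underlying idea is the same.

```typescript
// Minimal sketch: Pearson correlation between one molecular descriptor
// (e.g. a computed property such as logP) and measured activity.
// Field names and data are hypothetical.
interface ScreeningRecord { descriptor: number; activity: number; }

function pearson(records: ScreeningRecord[]): number {
  const xs = records.map(r => r.descriptor);
  const ys = records.map(r => r.activity);
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
  const mx = mean(xs), my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < records.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

const data: ScreeningRecord[] = [
  { descriptor: 1.2, activity: 40 }, { descriptor: 2.1, activity: 55 },
  { descriptor: 3.0, activity: 71 }, { descriptor: 3.8, activity: 80 },
];
console.log(`r = ${pearson(data).toFixed(2)}`); // strong positive correlation
```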

Collaboration Tools
Science is by nature a collaborative endeavour, and vendors are building collaborative capabilities into their systems. These capabilities come in different forms. Collaborative image annotations allow researchers to ask for help from colleagues, and direct their attention to specific regions of interest. Think of it as Google Docs for images. Event feeds and dashboards help teams keep up-to-date with projects, and communicate more effectively.

IOT in the Lab
Industry standards organizations like SiLA (Standardization in Lab Automation) have been working on standards to allow instruments and robots to communicate with LIMS software using standard Internet-of-Things (IoT) protocols.


Need help getting started with your ELN or LIMS project? Contact us


Requirements Are Required

Over the years, we’ve been involved in a number of LIMS/ELN selection projects. In this new series of blog posts, we’ll take a look at some of the lessons learned from those projects.

Requirements Are Required

While today’s ELN/LIMS solutions are highly configurable, there is no one-size-fits-all solution. Requirements analysis will help ensure that your selection process identifies the technologies that can best support your organization. It helps you make data-driven decisions and minimizes bias in the selection process.

Every business is unique. On the surface this may seem counter-intuitive. One might expect, for example, that all small-molecule companies would follow the same process to arrive at the same destination. But this isn’t the case. Even between small-molecule companies there are differences: rational drug discovery vs fragment-based design, knockdown studies vs knockout models, and so on.

When you review your process with your teams, requirements that might easily have been overlooked come to the surface. The data generated by each step gets documented, making it easier to identify software capable of parsing and loading it.

Priorities for each requirement are identified. This makes it possible to generate weighted scores when you assess the vendors, and tells you where to focus your resources.
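
The weighted-score calculation itself is simple arithmetic: each requirement carries a priority weight, each vendor receives a raw score per requirement, and the vendor’s total is the weighted sum. A minimal sketch (the requirements, weights and scores are invented for illustration):

```typescript
// Weighted vendor scoring: sum of (priority weight x raw score) per requirement.
// Requirements, weights and scores below are illustrative only.
interface Requirement { name: string; weight: number; } // e.g. weight 1-5

const requirements: Requirement[] = [
  { name: "Instrument data import", weight: 5 },
  { name: "Electronic signatures", weight: 4 },
  { name: "Cloud hosting", weight: 2 },
];

// Raw scores (e.g. 0-10) from the evaluation team, keyed by requirement name.
const vendorScores: Record<string, number> = {
  "Instrument data import": 7,
  "Electronic signatures": 9,
  "Cloud hosting": 4,
};

const weightedTotal = requirements.reduce(
  (sum, req) => sum + req.weight * (vendorScores[req.name] ?? 0), 0);
console.log(`Weighted total: ${weightedTotal}`); // 5*7 + 4*9 + 2*4 = 79
```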

Requirements for the system fall largely into two categories: scientific and technical. Here are a few examples.

Scientific Requirements

Data/Instrument – The system should be able to import data from all of the instruments you’re currently using. The vendor should have a relationship with instrument vendors to ensure that the drivers required to read these data files remain supported even as the instrument software changes.

Integration – The system should be as self-contained as possible, or should present a minimal number of integration points. Imagine that your system is a collection of links in a chain. Each time one or more links changes, you need to validate the system again.

Data Verification & Signoff – The system should support the ability of scientists to verify the data, and electronically sign the dataset.

Analysis – The system should support the data analysis work that needs to be done.

Technical Requirements

Cloud Requirements – Does the system need to be on-premise, or should it be in the cloud?

Backup Requirements – How frequently should the system be backed up? How will the backups be tested?

Disaster Recovery Requirements – In the event of a system failure, what plans should the vendor have in place to ensure that you can get up and running again quickly?

Data Transfer Requirements – How much data will be transferred to the system? For example, if you need to transfer large imaging data sets to the system, then you’ll need to look at bandwidth requirements.
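
That bandwidth math is worth doing up front. Here’s a minimal sketch of the calculation; the data-set size, link speed and efficiency factor are example figures, not measurements:

```typescript
// How long does it take to move a data set over a given link?
// Note the bits-vs-bytes conversion: link speeds are quoted in bits/s.
function transferHours(dataGB: number, linkMbps: number, efficiency = 0.7): number {
  const bits = dataGB * 8 * 1e9;                     // GB -> bits
  const effectiveBps = linkMbps * 1e6 * efficiency;  // allow for protocol overhead
  return bits / effectiveBps / 3600;
}

// Example: a 500 GB imaging data set over a 100 Mbps uplink.
console.log(`${transferHours(500, 100).toFixed(1)} hours`); // ~15.9 hours
```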

Performance Requirements – The system should always feel responsive to the user, with no lag at those times of day when all of the labs are entering data.

Single Tenancy vs Multi-tenancy – In a single-tenancy system, your instance of the application is the only one running on the hardware. You’re not sharing it with anyone else, and therefore can’t be slowed down by other tenants.


Need help getting started with your ELN or LIMS project? Contact us


Using Material Design Tools

In a previous blog post I described how material design can be used to create user interfaces. Material design has been described as a language, or a system, for describing and creating user interfaces. In the video below you can see a number of Google employees attempting to describe material design. One of the odd aspects of material design is that it’s difficult for even its developers to define precisely, in part because it is more a visual language than a written one, and because the language itself can be translated into specific implementations for different platforms like Android, iOS and the web.

At this year’s Google I/O developer conference, Google announced the release of Material Design 2.0. This release included new Material Design Components (MDC) for Android, iOS, Flutter and the web. And although these toolkits are still a work in progress, you can clearly see how Google plans to make it easy for developers to create user interfaces that are consistent across platforms and intuitive for users.

With the release of Material Design 2.0, Google also included a number of tools to make it easier for developers to add material design components to their projects.

Material Sketch Plugin

The Sketch application is one of the premier applications for designing mockups for user interfaces. At $99/year, it won’t break the bank, and will provide you with an easy way to create professional looking user interface designs that you can use as part of your prototyping exercises in Design Sprints.

Google released the Material Design Sketch plugin, which provides designers with material user-interface widgets to use in their designs.

Material Gallery

The Material Gallery web page provides an easy way for designers to share their designs with the rest of the team. Users and developers can comment on the designs just as they would on a Google Doc or Google Sheet. This is a great way for your team to firm up a prototype before starting the coding process.

Material Theme Editor

Material Design is intended to be flexible, allowing design and development teams to customise the look and feel. The Material Theme Editor makes this easy to do; however, there is one caveat: web components are not currently themeable. There is a specification making the rounds to address this, but it’s still very much a work in progress. That said, the Theme Editor makes it possible to style components to suit your needs. You can create a style guide for the research applications in your organization that any developer can easily follow. The Theme Editor gives you a way to create shareable themes that incorporate colors, typography and icon sets.
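
As a rough illustration of what a shareable theme amounts to in practice: MDC Web components read their colours from CSS custom properties, so a theme can be applied at runtime by setting those properties. A minimal sketch, assuming the standard --mdc-theme-* properties (the colour values are examples):

```typescript
// Apply a custom theme to MDC Web components at runtime by setting the
// CSS custom properties the components read. Colour values are examples.
const theme: Record<string, string> = {
  "--mdc-theme-primary": "#00695c",    // brand colour for your research apps
  "--mdc-theme-secondary": "#ff6f00",
  "--mdc-theme-background": "#fafafa",
};

for (const [prop, value] of Object.entries(theme)) {
  document.documentElement.style.setProperty(prop, value);
}
```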

Applying Material Design In Research Components

One of the most common questions we see is “how do I make my component more material-like?” The question comes up most frequently when you’re building a bespoke bioinformatics component that may have no analogue in the material world. For example, suppose you want to create a component to display a biological network. You might start by taking a look at the Cytoscape component found in BioJS.

The key to successfully implementing a material version of such a component is to return to the first principles used to design material components and ask yourself the following questions:

Does the component allow me to style text for nodes? Typographic choices (font, font weight, font size, text spacing, font colour) all play a role in readability, but they also help us understand the relationship between higher-level and lower-level objects. Typically, a higher-level object uses a larger font; a lower-level object uses a smaller font and perhaps a lighter font colour.
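
With Cytoscape.js (the library behind the BioJS component mentioned above), that typographic hierarchy maps directly onto style selectors. A minimal sketch, assuming nodes carry a `level` data field of our own invention:

```typescript
import cytoscape from "cytoscape";

// Typographic hierarchy: higher-level nodes get larger, darker labels.
// The `level` data field is an assumption of this sketch.
const cy = cytoscape({
  container: document.getElementById("network")!,
  elements: [
    { data: { id: "p1", label: "Disease process", level: "high" } },
    { data: { id: "g1", label: "Interacting protein", level: "low" } },
    { data: { source: "p1", target: "g1" } },
  ],
  style: [
    { selector: 'node[level = "high"]',
      style: { label: "data(label)", "font-size": 18, color: "#212121", "font-weight": "bold" } },
    { selector: 'node[level = "low"]',
      style: { label: "data(label)", "font-size": 11, color: "#757575" } },
  ],
});
```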

Can I progressively hide detail and reveal it as needed? For example, you might start by displaying a set of nodes that represent disease processes in cancer, indicating that the nodes are expandable with an appropriate icon. When the user double-taps a disease process node, the node expands to display a network of interacting proteins for that process. This process of progressively revealing detail makes it easier to tell a story, and reduces the cognitive load a user faces when confronted with a “hairball network” (yes, that’s a thing).
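
In Cytoscape.js terms, progressive reveal is just an event handler that adds child elements when a parent node is activated. A minimal sketch, reusing the `cy` instance from the previous example; `fetchSubnetwork` is a hypothetical helper that returns the interacting proteins for a disease process, and we use a plain tap here for simplicity:

```typescript
// Progressive reveal: expand a disease-process node into its protein
// sub-network on tap. `fetchSubnetwork` is a hypothetical helper.
declare function fetchSubnetwork(processId: string): cytoscape.ElementDefinition[];

cy.on("tap", 'node[level = "high"]', (event) => {
  const node = event.target;
  if (node.data("expanded")) return;   // only expand once
  node.data("expanded", true);
  cy.add(fetchSubnetwork(node.id()));  // add the hidden detail
  cy.layout({ name: "cose" }).run();   // re-lay out with the new nodes
});
```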

Does the application need to be responsive? In web parlance, a “responsive” app adjusts its layout to the size and type of display it’s running on. For example, an application might render its display one way on a mobile device and another way on a desktop. In the Web 1.0 world, companies often created “m-dot” sites (the name comes from the habit of creating a separate site for mobile traffic at “m.mysite.com”, serving an entirely different codebase). The flaw in that strategy is that you have to maintain separate codebases for mobile and desktop. In an enterprise setting, bandwidth is less of a concern; that said, many pharmaceutical companies are making use of tablets and mobile devices in the lab, so responsive web applications are becoming a more important aspect of the work.

In our case, we want an application that can be displayed on both mobile and desktop screens. To do this, we can use a card-based layout for mobile devices and a network-based layout for tablet and desktop displays, with a progressive-reveal approach so that just enough information is displayed. Both the network view and the card-based view share a common model.
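
A minimal sketch of that switch using the standard matchMedia API; `renderCardList` and `renderNetwork` are hypothetical view functions, and both read from the same shared model:

```typescript
// Responsive view switch: cards on narrow screens, network on wide ones.
// renderCardList/renderNetwork are hypothetical view functions that share
// one underlying data model.
declare function renderCardList(model: object): void;
declare function renderNetwork(model: object): void;

const model = {}; // the shared network model driving both views
const narrow = window.matchMedia("(max-width: 600px)");

function render(): void {
  if (narrow.matches) renderCardList(model);
  else renderNetwork(model);
}

narrow.addEventListener("change", render); // re-render on breakpoint change
render();
```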

To learn more about how we can help you with your informatics project, contact us.


Pipeline Stories: The Importance of Storytelling In Research

So much of the process of drug discovery revolves around answering questions: “what does this target do with respect to the indication?”, “how (in)tractable is this target?”, “how good are the models for this indication?”. With all of the focus on results, we sometimes forget how important the art of storytelling is to the furtherance of science.

Many companies hold a science meeting once a week or once a month. It gives scientists a chance to practice their storytelling skills: translating ideas and results into terms that are more easily accessible and understood. And groups like BioToasters provide scientists with similar opportunities outside their companies.

I was reminded how important storytelling is at a recent meeting of the San Diego Entrepreneur’s Exchange, when Jonathan Lim, CEO of Ignyta Pharmaceuticals, gave an insightful and inspirational talk on the power of perseverance in the face of adversity. His company and an East Coast competitor had been racing to be the first to bring to market a drug targeting TRKA, a protein that plays a key role in a number of cancers.

During the presentation, he shared some images which showed remarkable shrinkage of colorectal tumors over the course of a month.

In passing, he mentioned that one of their challenges had been finding enough patients to carry out the clinical trial. I had seen similar challenges in pancreatic cancer, where barely 4% of eligible patients enroll in most trials.

All through the talk, though, something was bothering me. I’ve been interested in perineural invasion for many years. It’s a common co-morbidity, found in 90% or more of pancreatic cancer patients, in which tumour cells invade nearby nerve tissue, causing extreme pain. It’s a difficult indication to treat, the standard of care is often not durable, and because it typically falls under the rubric of palliative care it’s often given short shrift when it comes to research funding.

But even the briefest scan of review articles on the subject will show that perineural invasion has been shown, in a number of papers, to play a role in the progression of pancreatic cancer. As one paper declared in its title, “Perineural Invasion: More Than Pain”.

What had been puzzling me was this: TRKA plays a large role in perineural invasion. And studies in mouse models of pancreatic cancer had demonstrated that inhibiting TRKA resulted in a reduction in both tumor size and pain. [1–7]

So given the role that TRKA plays, and given that 55,000 patients would be diagnosed with pancreatic cancer this year (of whom at least 90% would have this condition), why not approach the pancreatic cancer patient community and recruit from there?

I put the question to him after the talk and was surprised by the answer. The problem was that, in the minds of investors, treating perineural invasion was not as compelling a story as treating colon cancer. By making cancer the focus of their trials, they were able to find the investment they needed. The results of those trials (and the concomitant story) attracted the attention of Roche, and that was enough for a buy-out. The resulting drug, entrectinib, is currently going through Phase II/III trials for extra-cranial solid tumours. So I can’t really argue with success, and I’m hopeful that their work on entrectinib and a Smoothened inhibitor will bear fruit and may eventually help pancreatic cancer patients. A quick check of ClinicalTrials.gov confirmed that Ignyta is currently conducting a Phase II basket trial that includes pancreatic cancer patients.

So a story can have a profound effect on a company’s financial future, but often the stories that we tell during drug discovery play just as important a role. Stories can inform, recruit and inspire colleagues, they can point the way towards unmet medical need, and more often than not, they can spur further investigation by uncovering unasked questions.

At Aspen, we’re always interested in these stories. We’re interested in the hows and whys of target selection, and in everything from the point at which that decision is made to the point at which a new drug-like molecule is brought to the clinic. That is, in part, why we’ve built Pipeline, our pharmaceutical portfolio and project management application: to give you, our customers, better tools for telling stories.

Contact us to find out how you can participate in our early access program.

References

1. Hefti FF, Rosenthal A, Walicke PA, Wyatt S, Vergara G, Shelton DL, et al. Novel class of pain drugs based on antagonism of NGF. Trends Pharmacol Sci. 2006;27: 85–91.

2. Watson JJ, Allen SJ, Dawbarn D. Targeting Nerve Growth Factor in Pain. BioDrugs. 2008;22: 349–359.

3. Covaceuszach S, Cassetta A, Konarev PV, Gonfloni S, Rudolph R, Svergun DI, et al. Dissecting NGF interactions with TrkA and p75 receptors by structural and functional studies of an anti-NGF neutralizing antibody. J Mol Biol. 2008;381: 881–896.

4. Cattaneo A, Capsoni S, Margotti E, Righi M, Kontsekova E, Pavlik P, et al. Functional blockade of tyrosine kinase A in the rat basal forebrain by a novel antagonistic anti-receptor monoclonal antibody. J Neurosci. 1999;19: 9687–9697.

5. Degrassi A, Russo M, Nanni C, Patton V, Alzani R, Giusti AM, et al. Efficacy of PHA-848125, a cyclin-dependent kinase inhibitor, on the K-Ras(G12D)LA2 lung adenocarcinoma transgenic mouse model: evaluation by multimodality imaging. Mol Cancer Ther. 2010;9: 673–681.

6. Brasca MG, Amboldi N, Ballinari D, Cameron A, Casale E, Cervi G, et al. Identification of N,1,4,4-tetramethyl-8-{[4-(4-methylpiperazin-1-yl)phenyl]amino}-4,5-dihydro-1H-pyrazolo[4,3-h]quinazoline-3-carboxamide (PHA-848125), a potent, orally available cyclin dependent kinase inhibitor. J Med Chem. 2009;52: 5152–5163.

7. Ghilardi JR, Freeman KT, Jimenez-Andrade JM, Mantyh WG, Bloom AP, Bouhana KS, et al. Sustained blockade of neurotrophin receptors TrkA, TrkB and TrkC reduces non-malignant skeletal pain but not the maintenance of sensory and sympathetic nerve fibers. Bone. 2011;48: 389–398.


Pipeline Stories: Lessons Learned From Lessons Learned

In the fast-paced world of drug discovery, implementing a Lessons Learned program can be challenging at the best of times. At a recent Portfolio and Project Manager’s Meetup, I had the chance to talk with some folks from a local biotech company who were in the process of implementing just such a program.

There are a number of common refrains that Project Managers typically hear when attempting to implement a Lessons Learned program:

  • We’re done with that, let’s move on. Nine out of every ten programs fail, so when someone suggests stopping to look at the gory details behind a failure (one that, in truth, nobody wants to be associated with), it can be a hard sell. But the problem is the misperception that Lessons Learned is all about doing a root cause analysis of a failure, rather than making sure we record what we learned (good or bad) from a project in a form that can easily be re-used in future projects. That really cool synthetic route that improved binding affinity? It was recorded in an ELN that no one ever looked at again. That trick for getting the cell-based assay to work properly? It just walked out the door in the head of the lab tech who’s now working for your competitor. These scenarios happen every day, and a Lessons Learned programme can help create an institutional memory in ways that simply recording things in an ELN can’t.
  • My plate is full, do we really need to do this now? At the end of a project, people are moving on to the next big thing. They’re getting up to speed on their new projects, or the next set of tasks they’ve been assigned, and taking several days to a week for a Lessons Learned exercise can seem like something with little direct benefit to them at a time when they need to focus on their next assignment.

So how do we solve these challenges? How can we implement a programme in a way that benefits both the organisation as a whole and the individual participants? Keeping in mind that not every problem has a technical solution, it’s better to think of the solution as a three-legged stool consisting of people, process and technology. Let’s take a look at each leg in turn to see how we might implement a Lessons Learned programme.

Process
The standard approach to a Lessons Learned exercise is to wait until the end of the project, bring the team together for several days to a week, go over each stage of the project, review the tasks, risks and issues, and put together a document that accurately captures all of it. The problem with this approach is that the lessons you learned years ago, during the Target Validation or Lead Optimisation stages, are long forgotten by the time the exercise takes place. A better approach is to perform smaller, more tightly focused exercises at the completion of each stage. Since each is shorter in duration, you’re often not faced with the same resistance you might otherwise encounter.

Break the meetings into smaller, more focused sessions with team members from the same group: a biology meeting, a chemistry meeting, and so on.

Borrowing from the PMI’s guidelines for lessons learned (and the CDC’s PMG guidelines as well), here are a few general questions that facilitators can use to kick things off, along with some more focused, industry-specific ones:

  • What went right?
  • What went wrong?
  • What needs to be improved?
  • Were there any issues that occurred during this phase (e.g. Target Validation)? How did we solve them, what did we learn, and how can we apply that learning to other projects in the future?
  • Were there any risks that we identified and dealt with, or failed to deal with properly?
  • Were there any handoffs between groups where data had to be “munged” prior to handoff? What can we do to make that smoother?
  • Were there any issues with vendors or instruments that affected or might have affected our results? If so, how did we compensate for this? What should we do about this in the future?
  • Were there any automation-related issues and how were they handled?

People
To counter the negative bias that can be associated with “raking up the past”, it’s important to emphasize that the goal is to record the successes as well as the failures. How can we make sure that the next time the same situation arises, a colleague will be able to repeat the success or avoid the failure?

In some companies, Lessons Learned are part of an overall program of continuous improvement with a built-in reward mechanism. Many years ago, while I was working in Royal Dutch Shell’s Knowledge Management Systems group, one of our tasks was to implement a system that tracked how often a lesson was re-used and rewarded the person who created it. The system tracked cost savings for the company as well: each time the lesson was applied, a cost saving was realised and the employee was rewarded. This is one of the few applications I’ve worked on where the value of the application was made obvious by the application itself.

That approach worked well for a large multinational, but might not work for a single site. My point, though, is that it’s important to understand and formulate culturally appropriate reward mechanisms, whether that’s information that goes into your personnel file as part of your annual review or points that can be redeemed at the company store. The reward mechanism helps ensure that employees are constantly looking for areas of improvement within the company.

Technology
How does Pipeline help you implement a Lessons Learned program? To begin with, we give you tools for capturing the lesson.

You can create a lesson at any point in the discovery process and add it to your library. The library gives you a cross-cutting view of the lessons learned from all of your projects. You can attach documents, journal articles and even PubMed searches to a lesson, to help ensure it has all the supporting information it needs.

To ensure that your lesson is applied in other projects, we use the metadata around the lesson to bring it to the surface just when you need it. You can think of it as Match.com for lessons learned.

For example, suppose your lesson had to do with a neat trick for improving binding affinity for targets with a very specific domain. The metadata for the lesson can include that domain information, and whenever you tackle a new project involving the same domain, the lesson appears as a suggestion. To view the lessons that match the characteristics of your project, simply open the project and select Lessons Learned from the menu.
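
Conceptually, the matching behaves like tag overlap between a lesson’s metadata and a project’s characteristics. The sketch below is our own simplified illustration of that idea, not Pipeline’s actual implementation; the field names are invented:

```typescript
// Simplified illustration of metadata-driven lesson matching (not the
// actual Pipeline implementation). A lesson surfaces when its tags
// overlap with the characteristics of the current project.
interface Lesson { title: string; tags: string[]; }
interface Project { name: string; characteristics: string[]; }

function matchingLessons(project: Project, library: Lesson[]): Lesson[] {
  const traits = new Set(project.characteristics);
  return library.filter(lesson => lesson.tags.some(tag => traits.has(tag)));
}

const library: Lesson[] = [
  { title: "Binding-affinity trick for kinase domains", tags: ["kinase-domain", "SAR"] },
  { title: "Cell-based assay stabilisation", tags: ["cell-assay"] },
];
const project: Project = { name: "New kinase program",
  characteristics: ["kinase-domain", "small-molecule"] };

console.log(matchingLessons(project, library).map(l => l.title));
// -> ["Binding-affinity trick for kinase domains"]
```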

This helps you realise the true value of your lesson by applying it in multiple situations. It means that even with personnel changes, the institutional knowledge of your organisation doesn’t walk out the door. It’s used and re-used by the right people at the right time.

To learn more about how we can help your organisation, contact us.


Cachexia Update

A few years ago I wrote an article on cachexia, the muscle-wasting co-morbidity that occurs in roughly 30% of cancer patients. The article continues to get a lot of hits, so I thought I would post an update with some recent findings.

Patrick Swayze Exhibiting Signs of Cachexia

Although cachexia is a relatively common condition, it is understudied because of a lack of research funding. Cachexia can weaken the diaphragm and heart muscles, thereby accelerating a patient’s demise. However, a recent article in Nature Medicine suggests that the gene ZIP14 (SLC39A14) may hold the key to reducing the effects of cachexia and extending the lives of cancer patients. The authors found that ZIP14 is overexpressed in cachectic muscle, resulting in an excess of zinc in muscle tissue, and they were able to successfully block ZIP14 in mouse models.

One thing I would like to clarify: while searching for an image for this article, I came across some scurrilous web pages claiming that the reason Patrick Swayze, Steve Jobs and others looked so gaunt was chemotherapy. This is absolutely FALSE. Their appearance was due in large part to the effects of cachexia and had nothing to do with chemo or other treatments.

Please donate to organisations like Stand Up 2 Cancer and the Lustgarten Foundation and help make a difference in the lives of cancer patients.

References

  1. Wang G, Biswas AK, Ma W, Kandpal M, Coker C, Grandgenett PM, et al. Metastatic cancers promote cachexia through ZIP14 upregulation in skeletal muscle. Nat Med. 2018;24(6): 770. DOI: 10.1038/s41591-018-0054-2