About Achievers Spot
 
Achievers Spot is a well-established recruiting firm in Chennai. We are looking for dedicated, hardworking and focused life science graduates, and we offer medical coding trainee jobs in leading US healthcare BPOs.
 
What is Medical Coding?
 
Medical coding is the process of converting text information related to healthcare services into numeric diagnosis (medical problems) and procedure (treatments) codes using the ICD-9-CM and CPT code books.
Healthcare, including medical coding and billing, will generate three million new jobs through 2016 – more than any other industry.
Healthcare providers need efficient medical coders for HIPAA-compliant claims filing and reimbursement.
 
Eligibility:
 
 
Bioinformatics graduates and postgraduates (B.E., B.Sc., M.Sc. or M.E. in Bioinformatics)
Pay Scale: Rs. 9,000 to Rs. 13,000 per month initially, with assured career growth (incentives and benefits as per corporate standards)
 
Career Growth:
Excellent opportunity to enhance your career by earning CPC (Certified Professional Coder) certification from the AAPC (American Academy of Professional Coders) and CCS (Certified Coding Specialist) certification from AHIMA (American Health Information Management Association).
 
CPC and CCS-P certification training is also provided for freshers and experienced coders.
 
Contact Details:
Achievers Spot
13, Ramanathan Street, 2nd Floor, T.Nagar, Chennai – 600017
Landmark: Adjacent to Ranganathan Street
Ph: 9840708203/9566157632/9566157627/9361143143, 044-42126317/42057586/45001158
Email: meenag@achieversspot.com

Post Details
Job Title: Urgent Jobs For Bioinformatics Graduates For Medical Coding In Chennai
Classification: Pharma, Healthcare & Biotechnology Jobs
Job Type: Full-time
Job Function: Medical Coding Trainee
Location: 13, Ramanathan Street, 2nd Floor, T.Nagar, Chennai 600017
Country: India
Job Salary: 9000-13000/PM
Company and Contact Details
Company: Achievers Spot
Company Website: http://www.achieversspot.com
Company Profile: Achievers Spot is a well-established recruiting firm in Chennai offering medical coding trainee jobs in leading US healthcare BPOs.
Contact Person Name: Meena Sharma

FierceBigData
TechCrunch reported this week that startup Bina Technologies raised another $6.5 million in Series B funding to continue its efforts to help research universities, pharmaceutical companies and clinicians get access to sequence data that they can decipher.
In February, Bio-IT World reported on Bina's launch of its Genomic Analysis Platform, which helps solve what has quickly become a big data problem in sequencing. The Bina Box is an on-premises hardware and software solution that sits alongside genetic sequencers and captures the files streaming off them, which contain approximately a half-terabyte per sequence. The company said that the box handles assembly and alignment of raw reads and variant calling, which are uploaded to the cloud for comparative analysis, disease-association studies, aggregation, mining, and other downstream analysis.

More important, it lowers the cost significantly. Other companies, such as DNAnexus and Seven Bridges Genomics, have developed similar technology. Another startup out of Seattle, Spiral Genetics, raised $3 million this month for its service that helps researchers in academia and industry more quickly analyze raw sequence data. Company co-founder and CEO Adina Mangubat said that when her team realized the speed and volume with which raw sequence data was being generated, she saw an opportunity in high-performance bioinformatics tools.
According to an article in Nature last week, the market for bioinformatics services and software may soon surpass that for sequencing technologies, as online bioinformatics companies compete to bring genomics platforms and software to hospitals. Even as the cost of sequencing comes down, the infrastructure costs for hospitals and others to analyze the data on their own are still prohibitive.
So companies are turning to the cloud, where they can upload a client's or patient's sequencing data and run analysis without the cost of infrastructure. The resulting data can also be shared between doctors or scientists without having to transfer huge files. Nature says this will create a new crop of genetics interpretation and analysis firms.



Job Description
Key Skills: 1) Candidates should be from a bioinformatics background. 2) Must possess knowledge of medical terminology. 3) Must possess good written and verbal communication skills.

Placement Locations: Chennai, Trichy, Bangalore & Hyderabad
Placement Details: Placement is provided to all candidates successfully completing the training program in leading healthcare MNC BPOs.
Contact Details:
Achievers Spot
13, Ramanathan Street, 2nd Floor, T.Nagar, Chennai 600017
Landmark: Adjacent to Ranganathan Street
Ph: 9840708203/9566157632/9566157627/9361143143, 044-42126317/42057586/45001158
Email: meenag@achieversspot.com
Website: www.achieversspot.com

Contact Details
Name: Meena Sharma
Phone Number:

For an ambitious joint Synergy-ERC program with the Wellcome Trust Sanger Institute, the Netherlands Cancer Institute is looking for postdocs in bioinformatics.

Specifications
Location: Amsterdam, the Netherlands
Function types: Postdoc positions
Scientific fields: Natural Sciences, Engineering, Health
Hours: 40.0 hours per week
Education: University Graduate
Job number: AT Postdocs Bioinformatics
Apply for this job within 14 days

Job description

It is the ambition of this international team to unravel the genomic and phenotypic complexity of human cancers in order to identify optimal drug combinations for personalized cancer therapy. Our integrated approach will entail (i) deep sequencing of human tumours and cognate mouse tumours; (ii) drug screens in a 1000+ fully characterized tumour cell line panel; (iii) in vitro and in vivo shRNA and cDNA drug resistance and enhancement screens; (iv) computational analysis for response predictions; (v) validation in genetically engineered mouse models and patient-derived xenografts. This integrated effort is expected to yield novel combination therapies and companion-diagnostics biomarkers that will be further explored in our existing clinical trial networks.
The expected duration of the project is 6 years.
For extra information on the program, see the NKI page.

Requirements

We invite excellent and ambitious postdocs with expertise in genomic analysis, high-throughput robotic drug and shRNA screening, and molecular and cell biology, as well as expertise in bioinformatics and mouse model systems.
Since this is a close collaboration between groups at the two institutes in Amsterdam and Cambridge, we expect candidates to participate in regular visits and to be prepared to spend short periods in the group(s) of our collaborators.

Conditions of employment

We offer a stimulating and interactive research environment, free use of all state-of-the-art facilities, a competitive salary (including possibilities for additional tax reduction) and housing facilities in the vicinity of the Institute. You may be appointed for a period of up to 5 years.
Contract type: Temporary, Up to 5 years.

Organisation

The Netherlands Cancer Institute is an independent research institute located in the lively city of Amsterdam. The Institute covers all major areas of molecular and cellular cancer biology, with special emphasis on mouse tumor models, functional screens, cancer cell biology and translational research.
The Netherlands Cancer Institute and the Antoni van Leeuwenhoek Hospital form an integrated cancer center, combining 54 research groups and a hospital under one roof in a single, independent organization. All hospital departments have an extensive research program, often in close collaboration with the research groups. This research is focused on improving cancer treatment through imaging and molecular diagnostics, new medicines, improved operating techniques, more effective radiotherapy and combinations of these, epidemiology and psychosocial research. Approximately 550 people work in the research laboratories and many of the clinicians are involved in research. Work discussions, lectures and seminars are in English and a large number of international post-docs, students and staff members contribute to the stimulating and international atmosphere of the Institute.

Additional information

For further information you can contact Anton Berns, Daniel Peeper, Jos Jonkers and Lodewyk Wessels: +31 (0) 20 5129134
Please submit your application before April 15, mentioning the specific position(s) and job number you want to apply for, through the application link shown below.
For more information on the other openings within the program, see the NKI page.
More information about employer The Netherlands Cancer Institute on AcademicTransfer. Direct link to this job opening: www.academictransfer.com/17898

What are the intersections between biomedicine and humanities scholarship? How might biomedical research methodologies influence humanities inquiry? What interpretative processes might humanities scholarship share with biomedical research?
The Maryland Institute for Technology in the Humanities (MITH) invites biomedical and humanities scholars to join us in investigating data, biomedicine, and the digital humanities.
Opening Reception:
Wednesday, April 10, 2013

Symposium:
Thursday, April 11
Room 6137 (Special Events) McKeldin Library

Friday, April 12, 2013
Cafritz Lecture Hall
Clarice Smith Performing Arts Center

APPLY NOW
Check out our latest news:
Keynote Lecturer: David B. Searls
The MITH Blog
The NEH Press Release
The NLM Press Release
Brett Bobley’s recent blog on how Shared Horizons came to be.


Kahn Technologies has developed an algorithm for detecting an EEG biomarker for diagnosing schizophrenia. We are looking for a data mining expert, preferably with experience in oscillatory dynamics, to help in writing grants and to implement a data-mining system.

Kahn Technologies and Senesys have developed an algorithm for detecting a biomarker which can be used to diagnose schizophrenia. An EEG (electroencephalograph) is used to collect the oscillatory brain dynamics signals in response to certain stimuli - from this data we are able to extract a specific artifact prevalent among participants with schizophrenia.

Because of the large quantity of data gathered from the many trials that have been conducted and that will be conducted, we need someone to implement a data-mining system.
(Figure: normal vs. schizophrenia EEG signal)
Over the next two years we will be conducting clinical trials and further developing the technology. During that time we will need a consultant to help with the following:
  • consulting on grant writing as it relates to the database and data-mining
  • general technical consulting on the data mining
  • implementing the data-mining system
Currently the technology and algorithms identify a biomarker for the diagnosis of schizophrenia. With data-mining capabilities, we hope to be able to identify additional artifacts which can be used as biomarkers for the diagnosis and treatment of schizophrenia and possibly for the diagnosis and treatment of other disorders including Parkinson's, Alzheimer's, Huntington's and PTSD. As the scope of this technology expands, so do the demands on the database and data-mining capability.
The project requires a database that can store and make accessible all EEG data, and data-mining capabilities including the following:
  • identify previously unseen artifacts in the data
  • compare artifacts across different subsets of pre-diagnosed patients
  • test the efficacy of treatment and medication
As the project moves forward, we will be applying for several grants. We are looking for a consultant who can help with the grant-writing process as it relates to the database and data mining, and who can help the rest of the team understand the feasibility of current data-mining technology.
We are looking for a data mining expert, preferably with experience in oscillatory dynamics. Experience with schizophrenia or other disorders is not necessary.
If interested, please email us through kahn.technologies@gmail.com
Thank you for your time!
Kahn Technologies

  1. Price: $69.95 | Seller: Wal-Mart | Date: 2013-03-29

Product Description

Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics: 6th European Conference, EvoBIO 2008, Naples, Italy, March 26-28, 2008. This book constitutes the refereed proceedings of the 6th European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics, EvoBIO 2008, held in Naples, Italy, in March 2008, colocated with the Evo* 2008 events. Topics addressed by the papers include biomarker discovery, cell simulation and modeling, ecological modeling, fluxomics, gene networks, biotechnology, metabolomics, microarray analysis, phylogenetics, protein interactions, proteomics, sequence analysis and alignment, as well as systems biology. The 18 revised full papers were carefully reviewed and selected from 63 submissions. EvoBIO is the premier European event for experts in computer science meeting with experts in bioinformatics and the biological sciences, all interested in the interface between evolutionary computation, machine learning, data mining, bioinformatics, and computational biology.


Product Info

UPC: 97835407875632
Part Number: 9783540787563
SKU: 9783540787563
Shipping: 2.97 USD
Title: Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics: 6th European Conference, EvoBIO 2008, Naples, Italy, March 26-28, 2008
Author:
ISBN: 3540787569


From simple charts to complex maps and infographics, Brian Suda's round-up of the best – and mostly free – tools has everything you need to bring your data to life
One of the most common questions I get asked is how to get started with data visualisations. Beyond following blogs, you need to practise – and to practise, you need to understand the tools available. In this article, I want to introduce you to 20 different tools for creating visualisations: from simple charts to complex graphs, maps and infographics. Almost everything here is available for free, and some you have probably installed already.

Entry-level tools

At the entry level, we'll be looking at unexpected uses for familiar tools. You might not think of Excel as a visualisation package, for example – but it's capable of surprisingly complex results. If you are just getting started, these tools are musts to understand. If you deal with visualisations every day, you'll quickly find yourself advancing beyond them, but not everyone will, so you'll always be dealing with data coming in from sources you'd rather not deal with.

1. Excel

It isn't graphically flexible, but Excel is a good way to explore data: for example, by creating 'heat maps' like this one
You can actually do some pretty complex things with Excel, from 'heat maps' of cells to scatter plots. As an entry-level tool, it can be a good way of quickly exploring data, or creating visualisations for internal use, but the limited default set of colours, lines and styles make it difficult to create graphics that would be usable in a professional publication or website. Nevertheless, as a means of rapidly communicating ideas, Excel should be part of your toolbox.
Excel comes as part of the commercial Microsoft Office suite, so if you don't have access to it, Google's spreadsheets – part of Google Docs and Google Drive – can do many of the same things. Google 'eats its own dog food', so the spreadsheet can generate the same charts as the Google Chart API. This will get you familiar with what is possible before stepping off and using the API directly for your own projects.

2. CSV/JSON

CSV (Comma-Separated Values) and JSON (JavaScript Object Notation) aren't actual visualisation tools, but they are common formats for data. You'll need to understand their structures and how to get data in or out of them. All of the following toolkits accept at least one of the two formats as input.
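To make the two formats concrete, here's the same toy dataset both ways, with a deliberately naive sketch of reading each in the browser (the parseCSV helper below is illustrative only – it handles no quoting or escaping, so real projects should use a proper CSV library):

    // The same two records as CSV and as JSON (toy values).
    var csv  = 'tool,stars\nFlot,4\nD3,5';
    var json = '[{"tool":"Flot","stars":4},{"tool":"D3","stars":5}]';

    // JSON parses natively in the browser.
    var records = JSON.parse(json);            // -> array of objects

    // Naive CSV parsing: split rows, then map cells onto header names.
    function parseCSV(text) {
      var lines = text.split('\n');
      var head  = lines[0].split(',');
      return lines.slice(1).map(function (line) {
        var cells = line.split(','), row = {};
        head.forEach(function (h, i) { row[h] = cells[i]; });
        return row;
      });
    }
    console.log(parseCSV(csv)[0].tool);        // "Flot"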

Online visualisations

3. Google Chart API

The portion of the toolset for static images has been deprecated, so the Google Chart Tools site now only offers tools for dynamic charts. They are robust and work on all browsers supporting SVG, canvas and VML, but one big problem is that they are generated on the client side, which creates problems for devices without JavaScript, offline use – or just when saving in different formats. Static images didn't have the same issues, so I'm sorry to see them go.
However, the API has just about everything but the kitchen sink, from bar charts and line graphs to maps and even QR codes. You will probably find the right visualisation for your needs as long as you are comfortable with the Google look and not in need of extreme customisation. As a jumping-off point, it is a great tool to know how to use.
The portion for static images has been deprecated, but the Google Chart API is a good way to create dynamic visualisations
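As a minimal sketch of the dynamic-chart workflow – assuming Google's current loader script and the corechart package (the loader URL has changed over the years), with chart_div and the sample figures being our own placeholders:

    // Assumes the page includes
    //   <script src="https://www.gstatic.com/charts/loader.js"></script>
    // and an empty <div id="chart_div"></div>.
    google.charts.load('current', { packages: ['corechart'] });
    google.charts.setOnLoadCallback(function () {
      var data = google.visualization.arrayToDataTable([
        ['Year', 'Visits'],                    // header row
        ['2010', 100], ['2011', 170], ['2012', 240]
      ]);
      var chart = new google.visualization.LineChart(
          document.getElementById('chart_div'));
      chart.draw(data, { title: 'Visits by year' });
    });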

4. Flot

Flot is a great library for line graphs and bar charts. It works in all browsers that support canvas – which means most of the popular ones, with some extra libraries to get canvas to work as VML in older browsers. It's a jQuery library, so if you're already familiar with jQuery, it's easy to manipulate the callbacks, styling and behaviour of the graphics.
The nice thing about Flot is that you have access to plenty of callback functions, so you can run your own code and style the results when readers hover, click or mouse out, among other common events. This gives you much more flexibility than other charting packages, but there is a steeper learning curve. Flot is also limited to line and bar charts. It doesn't have as many options as other libraries, but it performs common tasks really well.
It's specialised on line and bar charts, but if you know jQuery, Flot is a powerful option
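Here's a minimal sketch of Flot's callback-driven style – assuming jQuery and jquery.flot.js are already included, and that #placeholder is our own sized div:

    // Requires <div id="placeholder" style="width:400px;height:200px"></div>.
    var sine = [];
    for (var x = 0; x < 10; x += 0.5) { sine.push([x, Math.sin(x)]); }

    $.plot($('#placeholder'), [{ label: 'sin(x)', data: sine }], {
      grid: { hoverable: true }                // enables the plothover event
    });

    // Interaction arrives as jQuery events on the placeholder element.
    $('#placeholder').on('plothover', function (event, pos, item) {
      if (item) { console.log('hovered point:', item.datapoint); }
    });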

5. Raphaël

Raphaël is another great JavaScript library for creating charts and graphs. The biggest difference to other libraries is that it focuses on SVG and VML as output. This has pros and cons. Since SVG is a vector format, the results look great at any resolution; but since it creates a DOM node for each element, it can be slower than creating rasterised images via canvas. However, the upside is that you can interact with each DOM element and attach events, just like HTML.
The website includes plenty of demos to show how easily Raphaël can create common charts and graphs but, because it can also render arbitrary SVG, it has the ability to create some very complex visualisations for which you might otherwise have to resort to other vector tools such as Illustrator or Inkscape.
Raphaël is a great way to create vector-based charts: slower than raster-based tools, but it's capable of complex results
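A minimal sketch of the node-per-shape approach described above – assuming raphael.js is loaded and the page has our own <div id="canvas"></div>, with toy bar values:

    var paper = Raphael('canvas', 320, 200);   // container id, width, height
    var values = [40, 70, 110, 90];            // toy data

    values.forEach(function (v, i) {
      var bar = paper.rect(20 + i * 40, 180 - v, 30, v);
      bar.attr({ fill: '#4a90d9', stroke: 'none' });
      // Each shape is a real SVG/VML DOM node, so events attach directly.
      bar.hover(function () { this.attr('fill', '#d94a4a'); },
                function () { this.attr('fill', '#4a90d9'); });
    });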

6. D3

D3 (Data-Driven Documents) is another JavaScript library that supports SVG rendering. The examples go beyond the simple bar charts and line graphs to much more complicated Voronoi diagrams, tree maps, circular clusters and word clouds. It's another great tool to have in your toolbox, but I wouldn't always recommend D3 as the go-to library. It's great for creating very complicated interactions – but just because you can, it doesn't mean you should. Knowing when to stay simple is a big part of choosing the right visualisation tool.
D3 is capable of creating very complex output – but it's best saved for special cases, not everyday use
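The quickest way to see D3's data-join at work is the classic divs-as-bars example – a sketch against the v3-era API that was current when this was written:

    // Assumes d3.js is loaded and <div class="chart"></div> exists.
    var data = [4, 8, 15, 16, 23, 42];         // toy values

    d3.select('.chart').selectAll('div')
        .data(data)                            // bind one value per div
      .enter().append('div')                   // create a div for each datum
        .style('width', function (d) { return d * 10 + 'px'; })
        .style('background', 'steelblue')
        .style('color', 'white')
        .style('margin', '1px')
        .text(function (d) { return d; });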

7. Visual.ly

If you are in need of an infographic rather than a data visualisation, there is a new crop of tools out there to help. Visual.ly is probably the most popular of these. Although primarily an online marketplace for infographic designers, its Create option lets you pick a template, connect it to your Facebook or Twitter account and get some nice cartoon graphics back. While the results are currently limited, it's a useful source of inspiration – both good and bad – and a site I expect to see grow in future, accepting more formats and creating more interesting graphics.
Visual.ly acts both as an online marketplace and simple creation tool for infographics

Interactive GUI controls

What happens when data visualisations become so interactive they themselves become GUI controls? As online visualisations evolve, buttons, drop-downs and sliders are morphing into more complex interface elements, such as little handles that let you manipulate ranges, changing the input parameters and the data at the same time. Controls and content are becoming one. The following tools will help you explore the possibilities this offers.

8. Crossfilter

As we build more complex tools to enable clients to wade through their data, we are starting to create graphs and charts that double as interactive GUI widgets. JavaScript library Crossfilter can be both of these. It displays data, but at the same time, you can restrict the range of that data and see other linked charts react.
Crossfilter in action: by restricting the input range on any one chart, data is affected everywhere. This is a great tool for dashboards or other interactive tools with large volumes of data behind them
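A minimal sketch of the dimension/group machinery – assuming crossfilter.js is loaded, with toy records standing in for real data:

    var payments = [
      { type: 'cash', amount: 20 }, { type: 'card', amount: 90 },
      { type: 'card', amount: 45 }, { type: 'cash', amount: 60 }
    ];

    var cf        = crossfilter(payments);
    var byAmount  = cf.dimension(function (d) { return d.amount; });
    var byType    = cf.dimension(function (d) { return d.type; });
    var typeCount = byType.group().reduceCount();

    // Restricting the range on one dimension (what a brush or slider does
    // in a dashboard) changes what every other dimension's groups report.
    byAmount.filterRange([40, 100]);
    console.log(typeCount.top(Infinity));      // counts reflect the filter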

9. Tangle

The line between content and control blurs even further with Tangle. When you are trying to describe a complex interaction or equation, letting the reader tweak the input values and see the outcome for themselves provides both a sense of control and a powerful way to explore data. JavaScript library Tangle is a set of tools to do just this. Dragging on variables enables you to increase or decrease their values and see an accompanying chart update automatically. The results are only just short of magical.
Tangle creates complex interactive graphics. Pulling on any one of the knobs affects data throughout all of the linked charts. This creates a real-time feedback loop, enabling you to understand complex equations in a more intuitive way
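A minimal sketch of Tangle's model pattern, based on its documented cookies-and-calories example – assuming Tangle.js plus the TangleKit controls are included, and with our own element and variable names:

    // HTML: <p id="example">When you eat
    //   <span data-var="cookies" class="TKAdjustableNumber"
    //         data-min="1" data-max="10"> cookies</span>, you consume
    //   <span data-var="calories"></span> calories.</p>
    var tangle = new Tangle(document.getElementById('example'), {
      initialize: function () { this.cookies = 3; },               // start value
      update:     function () { this.calories = this.cookies * 50; } // derived
    });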

Mapping

Mapping used to be a really hard task on the web. Then Google Maps came along and blew away every preconceived notion of how an online map should work. Soon after, Google released its Maps API, which allowed any developer to embed maps in their own sites.
Since then, the market has matured a great deal. There are now several options out there if you are looking to embed custom mapping solutions in your own data visualisation project, and knowing when to choose one over the others is a key business decision. Sure, you can probably shoehorn everything you need into any of these maps, but it's best not to have a hammer and view every problem as a nail.

10. Modest Maps

Modest Maps is a tiny mapping library. Weighing in at only 10kB, it is the smallest of options discussed here. This makes it very limited in its basic form – but don't let that fool you: with a few extensions, such as Wax, you can really make this library sing. This is a product of Stamen, Bloom and MapBox, so you know it has an interesting track record.
Teamed with additional libraries, such as MapBox's Wax (pictured), the tiny Modest Maps becomes a powerful tool
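In its basic form a Modest Maps page is only a few lines – a sketch assuming modestmaps.js, our own <div id="map">, and an illustrative {Z}/{X}/{Y} tile template:

    var layer = new MM.TemplatedLayer('http://tile.stamen.com/toner/{Z}/{X}/{Y}.png');
    var map   = new MM.Map('map', layer);      // container id, tile layer
    map.setCenterZoom(new MM.Location(51.5, -0.12), 10);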

11. Leaflet

Brought to you by the CloudMade team, Leaflet is another tiny mapping framework, designed to be small and lightweight enough to create mobile-friendly pages. Both Leaflet and Modest Maps are open source projects, which makes them ideal for using in your own sites: with a strong community backing them, you know they won't disappear any time soon.
Leaflet is a small, lightweight JavaScript library ideal for mobile-friendly projects
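The core Leaflet workflow is similarly compact – assuming leaflet.js and leaflet.css are included and the page has our own sized <div id="map"></div>:

    var map = L.map('map').setView([51.505, -0.09], 13);

    L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '&copy; OpenStreetMap contributors'
    }).addTo(map);

    L.marker([51.5, -0.09]).addTo(map)
      .bindPopup('A marker with a popup.');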

12. Polymaps

Polymaps is another mapping library, but it is aimed more squarely at a data visualisation audience. Offering a unique approach to styling the maps it creates, analogous to CSS selectors, it's a great resource to know about.
Aimed more at specialist data visualisers, the Polymaps library creates image and vector-tiled maps using SVG
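A minimal sketch of Polymaps' chained, functional style – assuming polymaps.js, our own <div id="map">, and an illustrative tile URL; note the SVG container rather than a div of images:

    var po = org.polymaps;
    var map = po.map()
        .container(document.getElementById('map').appendChild(po.svg('svg')))
        .center({ lat: 37.78, lon: -122.23 })
        .zoom(10)
        .add(po.interact());                   // pan/zoom behaviour

    map.add(po.image()
        .url(po.url('http://{S}tile.openstreetmap.org/{Z}/{X}/{Y}.png')
               .hosts(['a.', 'b.', 'c.'])));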

13. OpenLayers

OpenLayers is probably the most robust of these mapping libraries. The documentation isn't great and the learning curve is steep, but for certain tasks nothing else can compete. When you need a very specific tool no other library provides, OpenLayers is always there.
It isn't easy to master, but OpenLayers is arguably the most complete, robust mapping solution discussed here
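To give a flavour of the 2.x-era API – a sketch assuming OpenLayers.js and our own <div id="map"></div>; note the explicit projection handling that the lighter libraries hide from you:

    var map = new OpenLayers.Map('map');
    map.addLayer(new OpenLayers.Layer.OSM());  // OpenStreetMap base layer

    // Lon/lat must be transformed into the map's spherical-mercator projection.
    var centre = new OpenLayers.LonLat(-0.12, 51.5).transform(
        new OpenLayers.Projection('EPSG:4326'),
        map.getProjectionObject());
    map.setCenter(centre, 12);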

14. Kartograph

Kartograph's tag line is 'rethink mapping' and that is exactly what its developers are doing. We're all used to the Mercator projection, but Kartograph brings far more choices to the table. If you aren't working with worldwide data, and can place your map in a defined box, Kartograph has the options you need to stand out from the crowd.
Kartograph's projections breathe new life into our standard slippy maps

15. CartoDB

Finally, CartoDB is a must-know site. The ease with which you can combine tabular data with maps is second to none. For example, you can feed in a CSV file of address strings and it will convert them to latitudes and longitudes and plot them on a map, but there are many other uses. It's free for up to five tables; after that, there are monthly pricing plans.
CartoDB provides an unparalleled way to combine maps and tabular data to create visualisations
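Beyond the web interface, tables are queryable over HTTP – a sketch against the SQL API (v2 at the time of writing), where the account name and table are hypothetical placeholders:

    var url = 'http://examples.cartodb.com/api/v2/sql?q=' +
              encodeURIComponent('SELECT name, latitude, longitude FROM my_places LIMIT 10');

    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
      var result = JSON.parse(xhr.responseText);
      result.rows.forEach(function (row) {     // rows keyed by column name
        console.log(row.name);
      });
    };
    xhr.send();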

Charting fonts

One recent trend in web development is to merge symbol fonts with font embedding to create beautifully vectorised icons. They scale and print perfectly, and look great on newer Retina devices too. A few of these fonts, such as FF Chartwell and Chartjunk, have been specially crafted for the purpose of displaying charts and graphs. They have the usual problem of OpenType not being fully supported in all browsers, but they're something to consider in the near future.

Getting serious

If you're getting serious about data visualisations, you need to move beyond simple web-based widgets onto something more powerful. This could mean desktop applications and programming environments.

16. Processing

Processing has become the poster child for interactive visualisations. It enables you to write much simpler code which is in turn compiled into Java. There is also a Processing.js project to make it easier for websites to use Processing without Java applets, plus a port to Objective-C so you can use it on iOS. It is a desktop application, but can be run on all platforms, and given that it is now several years old, there are plenty of examples and code from the community.
Processing provides a cross-platform environment for creating images, animations, and interactions

17. NodeBox

NodeBox is an OS X application for creating 2D graphics and visualisations. You need to know and understand Python code, but beyond that it's a quick and easy way to tweak variables and see results instantly. It's similar to Processing, but without all the interactivity.
NodeBox is a quick, easy way for Python-savvy developers to create 2D visualisations

Pro tools

At the opposite end of the spectrum from Excel are professional data-analysis tools. If you are serious about data visualisation, you need to be at least aware of, if not proficient in, some of these. Industry-standard tools such as SPSS and SAS require expensive subscriptions, so only large institutions and academia have access to them, but there are several free alternatives with strong communities. The open-source software is just as good, and the plug-ins and support are better.

18. R

How many other pieces of software have an entire search engine dedicated to them? A statistical package used to parse large data sets, R is a very complex tool, and one that takes a while to understand, but has a strong community and package library, with more and more being produced. The learning curve is one of the steepest of any of these tools listed here, but you must be comfortable using it if you want to get to this level.
A powerful free software environment for statistical computing and graphics, R is the most complex of the tools listed here

19. Weka

When you get deeper into being a data scientist, you will need to expand your capabilities from just creating visualisations to data mining. Weka is a good tool for classifying and clustering data based on various attributes – both powerful ways to explore data – but it also has the ability to generate simple plots.
A collection of machine-learning algorithms for data-mining tasks, Weka is a powerful way to explore data

20. Gephi

When people talk about relatedness, social graphs and co-relations, they are really talking about how two nodes are related to one another relative to the other nodes in a network. The nodes in question could be people in a company, words in a document or passes in a football game, but the maths is the same. Gephi, a graph-based visualiser and data explorer, can not only crunch large data sets and produce beautiful visualisations, but also allows you to clean and sort the data. It's a very niche use case and a complex piece of software, but it puts you ahead of anyone else in the field who doesn't know about this gem.
Gephi in action. Coloured regions represent clusters of data that the system is guessing are similar

Further reading

  • A great Tumblr blog for visualisation examples and inspiration: vizualize.tumblr.com
  • Nicholas Felton's annual reports are now famous, but he also has a Tumblr blog of great things he finds.
  • From the guy who helped bring Processing into the world: benfry.com/writing
  • Stamen Design is always creating interesting projects: stamen.com
  • Eyeo Festival brings some of the greatest minds in data visualisation together in one place, and you can watch the videos online.

 
 Bioinformatics Focus On Analytical Methods (FOAM) 2013 was run as part of CSIRO’s Computational and Simulation Sciences and eResearch Annual Conference and Workshops, and sponsored by the CSIRO Bioinformatics Core and The Australian Bioinformatics Network (ABN).
The first half of FOAM 2013 was aimed at CSIRO bioinformaticians, computational biologists and quantitative bioscientists, recognising that this is a once-a-year opportunity for staff across Australia to get together to discuss CSIRO-specific issues.
The second half of the meeting was aimed at bioinformaticians, computational biologists and quantitative bioscientists in general. Feedback to the ABN indicated a preference to hold bioinformatics-oriented meetings in conjunction with other events, rather than initiating a standalone conference (at least for the time being). CSIRO’s CSS conference gives us a great opportunity to hold a very affordable (i.e., free to members) ABN event at a great location in a city with a high concentration of Australian life-science research.
We saw a diverse and engaging agenda of presentations, reflecting the breadth of research that falls under the heading “bioinformatics”. We encourage you to get a sense of the event by checking out those presentations uploaded to the Australian Bioinformatics Network Slideshare: http://www.slideshare.net/AustralianBioinformatics/tag/bioinformatics-foam-2013



Applying Bioinformatics to Precision Medicine

"You know that only in the future will we have the methodologies to infer these functionalities and to be able to assign and interpret [the genome] at the clinical level."—Fátima Al-Shahrour
Someday, it should be possible for doctors to send individual cancer patients in for a genomic analysis and, based on the results, prescribe the drug they know will be the most effective. While the promise of this kind of personalized medicine is still distant, researchers like Fátima Al-Shahrour, head of the Translational Bioinformatics Unit in the clinical research program at the Spanish National Cancer Research Center (CNIO) in Madrid, are working on it now, interpreting the genomes of individual cancer patients and searching for clues to how they will respond to various treatments. The field, which is known as cancer pharmacogenomics, is still in its infancy, but Al-Shahrour believes that "in the future [it] could benefit many people."
Trained as an organic chemist, molecular biologist, and bioinformaticist, Al-Shahrour sees many exciting opportunities for early-career scientists who are willing to work at the crossroads where biomedical research, bioinformatics and computational biology, and clinical research meet. "Medicine is going into that direction, so every hospital, every clinician, every laboratory in the future is going to need people who can interpret those results," she says.

From cancer mutations to personalized treatments

The CNIO research that Al-Shahrour is involved in starts when her clinical colleagues recruit cancer patients for whom conventional treatments have been exhausted. A small group of cancer patients with a variety of cancers, including melanoma, glioblastoma, and pancreatic cancer, is currently involved in CNIO's search for alternative treatment options. Once they're recruited, experimental biologists perform a genomic profile of each patient, sequencing the exomes—the coding portion of the genomes—of individual tumors.
As the lead bioinformaticist on the team, Al-Shahrour analyzes the genomic data in search of mutations. She attempts to match the mutations with existing literature and databases to predict how a patient is likely to respond to nonconventional drugs. This work can, in turn, inform treatment decisions in the clinic.
Her work also feeds into an experimental approach to treatment, helping biologists decide which drugs to test in animal models of the patients’ tumors—xenograft mice used as proxies, or avatars, as the team calls them—to test how effective particular drugs could be in particular patients. The idea is that the treatments that the mouse responds to best can then be administered to the patient.
Beyond the treatment of current patients, Al-Shahrour is helping to develop a database of novel mutations and their associations to drug responses, together with computational methodologies, that could help predict drug responses in future patients. Now, "we find key mutations in genes that are expected to be mutated, but then we find many other mutations," which need to be tested in avatars to determine whether they might be clinically relevant. Eventually, she hopes to reach the point where, "if we find these mutations or any similar ones [in new patients] … we can give them a treatment that has already been given to another patient with a similar genomic profile."
The study is still in its pilot phase. Avatars have been successfully created for about half the patients, and some patients have been treated with nonconventional drugs that seemed promising in their avatars. "As a proof of concept, it has worked for a few patients," Al-Shahrour says—but more successes are needed to put the approach on firmer footing, she adds.
The stakes are high. For cancer patients who do not respond or relapse after conventional therapy, "there is no other treatment except to include them in a clinical trial or [offer them] this possibility that we are putting in place."

Special Issue: Cancer Genomics

A collection of articles from Science, Science Signaling, and Science Careers examines how a whole-genome approach is shaping our understanding of cancer.

At the heart of translation

In this multidisciplinary project, one role that Al-Shahrour plays is to ensure that genomic information reaches clinicians in a useful form. You can’t just give a medical doctor making treatment decisions a list of 3000 mutations found in those patients' tumors, she says. "Rather, you have to give him … five genes that are potentially important based on their clinical relevance."
Al-Shahrour also uses the interpretation of genomic data to bridge bioinformatics and biomedical research. Once a novel mutated gene has been identified in avatars, it's necessary to study its biological function to confirm whether "this is the gene [causing] this predisposition to be sensitive to this treatment," Al-Shahrour says. She contributes to the discussions and decisions about which experiments to prioritize.
Another aspect of Al-Shahrour’s job is to provide bioinformatics and computational support to clinicians and experimental biologists. She works at the interface of the computational biology and clinical research programs at CNIO, which puts her "in a unique, critical and challenging position," writes Alfonso Valencia, the director of the institute’s Structural Biology and Biocomputing Programme, in an e-mail to Science Careers. "Her group is responsible [for] organizing, digesting and analyzing the vast amount of data produced by the clinicians, including genomics and medical information, as well as the results of the analysis of xenografts. … A task for which she has the support of my group but also the implicit task of pushing us to streamline and optimize our tools and methods to fit the needs of the analysis."
But the biggest challenge for Al-Shahrour is the now very limited knowledge of the functionality of the genome. Finding mutations that you know neither the cause nor the effects of is frustrating. "[Y]ou know that only in the future will we have the methodologies to infer these functionalities and to be able to assign and interpret [the genome] at the clinical level," Al-Shahrour says.

A new breed of scientist

According to Valencia, the job that Al-Shahrour does requires a very wide range of knowledge and skills; he emphasizes her "biological background, capacity to develop bioinformatics methods, deep understanding of genomics, good communication skills and proved record in team management." Also important, he adds, is her clear understanding of the limitations of the experimental and computational techniques.
But what Al-Shahrour herself sees as her most important asset is her broad view of the field, encompassing the development of computer tools, databases, and computational methodologies and their use to study genes, cell lines, and patients. She developed this broad view via a series of career steps, first obtaining a B.Sc. degree in organic chemistry. She began her Ph.D. studies in molecular biology at CNIO in 2002, just as the use of DNA microarrays was becoming mainstream and methods were being developed for the functional analysis of genome-scale experiments. Under the supervision of Joaquín Dopazo, she worked on computational methodologies for microarray gene expression analysis, integrating databases and applying statistical tools to inferring which genes were most functionally relevant.
After her Ph.D., Al-Shahrour went on to work at the Broad Institute in Cambridge, Massachusetts, in the computational biology and bioinformatics lab of Chief Informatics Officer Jill Mesirov, where she worked closely with cancer computational biologists, one of whom was Pablo Tamayo. There, she "pioneered the use of molecular signatures to characterize the cellular state of cancer cells. This included projecting a variety of datasets in the space of genes induced by a variety of oncogenes," Tamayo writes in an e-mail to Science Careers. In October 2008, Al-Shahrour joined the lab of clinician-scientist Benjamin Ebert at Brigham and Women’s Hospital in Boston as a staff computational biologist, studying cancer biology and treatment using hematopoiesis as a model system. During her time in Cambridge and Boston, Al-Shahrour says, she learned how to work in large multidisciplinary groups, and clinical exposure taught her that, for bioinformaticists, the job isn't just to analyze the data but also to design the studies and interpret the data.
Early-career scientists who wish to follow in her footsteps must be ready to embrace the training challenges. Tamayo writes: "My advice to them is to study mathematics (not only old statistics but also advanced probability), information theory, machine learning, programming, numerical methods, chemistry, physics, cellular biology and biochemistry. It is important not only to be able to talk to multiple domain experts, and develop a solid hard-core analytical mind frame to cast problems, but also to have access to a rich set of paradigms about how to deal with complexity." Cancer pharmacogenomics is "a particularly demanding field that requires a lot of flexibility and adaptability in terms of what problems one solves over time and in requiring to learn from many fields of expertise," he adds.  
The challenges in the field are considerable, but Tamayo, Valencia, and Al-Shahrour all see great promise in cancer pharmacogenomics as an approach to treatment and as a career. "For the first time we are analyzing real data, that is, data from patients," Al-Shahrour says. She has moved from searching for mutations largely for the sake of knowledge to interpreting the genome to more directly help patients, which is, to her, the most exciting part of her research.
Elisabeth Pain is contributing editor for Europe.
10.1126/science.caredit.a1300050
