Anyone have any insight into why Consumer Reports found a major decline in the fuel system's reported reliability for 2012 models?
I know that CR bases this on owner reports. What I don't know is the reason people are reporting problems.
Overall, they found "Much Better than Average" in all categories of "Trouble Spots by Year" for 2012 except "Fuel System." In that one category, they report owners rating the 2012 Ridgeline "Much Worse than Average."
Reason I'm asking: I'm looking at replacing my 2006 RTL Navi, which has been fantastic, with a 2013. If there is some significant problem, though, I might skip that idea. Interestingly, they found three areas of the 2006 that were Better than Average, and all others Much Better than Average. You might say that in the 2012 reliability reports, having everything Much Better than Average and one category Much Worse than Average balances out, but there are no gradations of that black mark and no specifics on what the problem may be. I don't want to buy into a problem.
I also find it hard to believe that anyone would intentionally waste time filling out a CR survey without trying to provide data as accurate as possible. It's really kind of a hassle. But I have no doubt it happens for one reason or another.
But based on years of comparing CR reliability charts for autos against long-term reviews from various sources, recall notices, etc., I have found very few surprises. You can see when Ford improved, when Toyota got worse, and so on, and IMO that serves most consumers looking to spot trends across makes over the years. I'm sure it's not perfect, but nothing else seems to do it as well.
I bought my 2012 RTL in November 2011. When I joined the ROC last year and posted pictures of my new truck, other members said that mine was the first reported 2012 on the forum. The dealer said that mine was the first 2012 they had delivered to a customer.
After one year, I have put 12000 miles on it. The fuel consumption has been slightly better than the Honda-published estimates. I had one instance of the "Check Fuel Cap" warning a few weeks ago. I removed the fuel cap and re-tightened it. The warning cleared the next time I started the truck.
I'm very happy with my Ridgeline so far and I have seen no other evidence of any problems related to the fuel system.
I can't see that all the 'stuffing' did Chrysler much good. lol
Sorry, I don't buy it. Must be only the Japanese automakers that are doing the stuffing, since they are always rated so highly, as expected.
Yes, the truth is that data from JD doesn't mesh well with other data. Is there something fundamentally "wrong" with it? No. But unless you're interested in how a car fares over the first 36 months of ownership, while it is under warranty, it lacks validity (it doesn't really measure what it purports to measure, or at least what the media portrays that it measures). That said, the fact remains that the CR data matches up very well with the data collected and maintained by the companies that sell extended warranties, and the JD data does not. Perception has nothing to do with the pricing of extended warranties; they are priced based on the frequency and cost of repairs. I personally don't believe the CR data set is any more contaminated than the JD data. I'd encourage folks to go to the JD site and the CR site and compare for yourself the process used to obtain the data. BOTH use customer survey data largely obtained by mailers. Maritz also collects data, a very comprehensive survey in fact, but I'm not sure who buys that data.
Strong initial quality of 2009 model-year vehicles, which were produced during one of the most challenging years for the automotive industry, has translated into historically high levels of vehicle dependability in 2012, according to the J.D. Power and Associates 2012 U.S. Vehicle Dependability Study (VDS).
The study, which is based on responses from more than 31,000 original owners of 2009 model-year vehicles after three years of ownership, measures problems experienced during the previous 12 months by those original owners. Overall dependability is determined by the level of problems experienced per 100 vehicles (PP100), with a lower score reflecting higher quality.
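To put the PP100 metric quoted above in concrete terms, here's a quick Python sketch. The survey counts and problem totals below are invented for illustration; only the formula (problems per 100 vehicles, lower is better) comes from the study description.

```python
# Illustration of J.D. Power's PP100 metric: total problems reported,
# scaled to a per-100-vehicles rate. The numbers here are made up.
problems_reported = 465
vehicles_surveyed = 310

pp100 = problems_reported / vehicles_surveyed * 100
print(f"{pp100:.0f} PP100 (lower score = higher dependability)")
```

So two brands can be compared on the same footing even if different numbers of owners responded for each.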
For these ratings, Consumer Reports looks at data from hundreds of thousands of car owners. It also looks at many different car models and reports separately on models with different engines or trim lines. It works independently from automakers, so it doesn't need to worry about losing advertising sales for giving a vehicle a poor rating. And it interprets its findings without having to report them back to automakers. It reports all the data it collects, not just the data it collects on the cars with the best results.
Unlike J.D. Power's survey based on the first 90 days of ownership and its Vehicle Dependability studies which track 3-year-old vehicles, the Consumer Reports survey asks for subscribers' opinions about their cars over the course of the last 12 months. Starting in 2006, the Consumer Reports survey began rating 10 model years for cars; in other words, it rates a particular model of car from the time when it's brand new to when that same model car is 10 years old. This helps to give a more complete picture of the car's performance throughout the life of the model.
Consumer Reports ranks cars using two methods: First, it sends out surveys to ask consumers about the overall performance of their vehicles. Second, it checks this data against its own tests, which are conducted on a test track.
Consumer Reports annually collects its data from a questionnaire that is mailed to subscribers of both its Web site and its magazine. In 2009, the survey was sent to 7 million subscribers, and 1.4 million responses were received [source: Consumer Reports]. The survey is created by a staff of social scientists who also consult with automotive engineers and statisticians at the Consumer Reports national research center.
The survey asks subscribers to report any problems they've had with their vehicles during the past 12 months. They're told to categorize problems by level of severity according to cost, safety, failure or time without a vehicle. Although subscribers also may report problems covered under warranty, they may not report damages or items under recall. Subscribers also are told not to report issues that are due to normal wear and tear, like issues related to batteries or brake pads, unless they failed much earlier than expected. This "reliability data" is updated annually.
After survey results are tabulated, engineers then test the vehicles on the Consumer Reports Test Track. They run a series of 50 performance tests to check against the data from the 1.4 million subscribers from whom they've collected data.
The main point is CR only mails out surveys to subscribers, including a return envelope. I don't recall clearly but it seemed like there is a CR subscription code associated with each survey.
So the only way you could 'stuff' results would be to make copies of the forms they send out, and provide additional envelopes as well for non-subscribers. Is CR going to accept those? Unlikely.
Imagine the chaos a laid-off or fired employee could create reporting this survey stuffing to CR and the media. Sorry, I'm not buying it, especially for the last decade. I did not participate in the CR surveys in the 70's and 80's.
It is a lot easier to get "dirty data" when the sample is 31,000 versus 1,400,000. Not saying things did or didn't happen, just that with those numbers it would be awfully tough to skew the CR data given their current program. And it isn't that much homework; it's just a copy-paste from a Google search. Both JD and CR are quite up front and honest about the way they collect data.
Remember, Joe was talking about the 70's and 80's, personally I did not subscribe to CR and have no knowledge about how careful they were in collecting and preparing data at that time.
By the same token, I would not have put it past some automakers to attempt such things when they were really feeling the heat from increasing interest in imports and being pressured to adopt disc brakes, three-point seat belts, fuel injection, etc.
I started subscribing to CR in the late 70's and am now online today. Having driven many company cars since 1977, I've had many experiences different from what CR reported in their year-end reports. It never bothered me because someone (other than myself) paid for those cars.
I can say that I have never really paid much attention to those reports because they just never did match my experiences.
As Joe said, the individual field tests by CR were a lot more interesting to me. It's one of the reasons I'm a CR member today.
Good point. I would think that would be true of many of us, since they are reporting broad trends based on lots of users. It's notable that CR has reported steady improvements over the years in automobile reliability generally.
In any case, if a given car is rated "worse than average" in reliability in a certain area, some or even most of those cars may have no problem at all. The CR reliability report would only indicate that when compared to all cars, there were more reliability problems reported for the noted car than for the average.
As an extreme example, if every car owner surveyed for a given year, except one, had no reliability problems at all in a given category, but one had a car that had a problem, that car would, by definition, be worse than average. That owner is presumed to be representative of others who were not surveyed, but the point, in the extreme, is that a car with a 95% reliability rating would be worse than average if all others have a 98% reliability rating. I would think that plenty of people driving a car with a worse than average reliability rating might never experience a problem.
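To make that "worse than average" point concrete, here's a small Python sketch. The model names and problem rates are entirely made up; the point is just the arithmetic of comparing one model's reported-problem rate against the fleet average.

```python
# Hypothetical survey results: fraction of owners reporting a
# fuel-system problem. All numbers are invented for illustration.
problem_rates = {
    "Model A": 0.05,  # 5% of surveyed owners reported a problem
    "Model B": 0.02,
    "Model C": 0.02,
    "Model D": 0.02,
}

fleet_average = sum(problem_rates.values()) / len(problem_rates)

for model, rate in problem_rates.items():
    verdict = "worse than average" if rate > fleet_average else "average or better"
    print(f"{model}: {rate:.0%} reported problems -> {verdict}")
```

Model A comes out "worse than average" even though 95% of its owners reported no problem at all, which is exactly the point made above.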
The true value of the reliability charts is in the 'trends'. Which makes typically have more or fewer defects over time becomes quite obvious.
It does not mean you will have every one of the problems indicated, or maybe not even one of them. But if you do have a problem, there is greater probability it will be one or more of those.
The trends show that Japanese-based vehicles (and more recently South Korean ones) have fewer issues than domestics.
And among domestics, Ford products fare better, Chrysler products, worse.
Among Asian imports, Honda and Toyota fare the best, with the others close behind, with a few exceptions.
You should check out True Delta, which does online auto surveys in a very similar fashion. The data collected is similar to CR's in content, but the vehicle reliability results are expressed quite differently. They also have subjective comments from owners on personal experiences.
I don't think any of these 'trends' are any big surprise in the real world.
In summary, you could say CR subscribers simply typify the experience of most owners across the US.
Yet another take on reliability, and an interesting one at that, although not wholly accurate in its depiction of CR. But it seems another way to skin the cat.
This is a really interesting article. Thanks for sharing it.
In checking into this further, this company is selling a product that you plug into your car when the check engine light comes on, to find out what the car's diagnostic system thinks the problem is. It then allows you to use some software to detail what the purported problem is and what it might cost. The video on their website is almost comical in that they refer to the "patented red, yellow, and green LED lights" on the device. What's important to understand, from what I can tell, is that when you get a check engine light, plug the device into your car, and use the company's software to get a diagnosis, that information is reported back to the company, which then uses it to create the data it reports to the media.
Problems I see in this, which may or may not be valid: What if you have a problem and no check engine light (e.g., a front-end alignment issue, a body integrity issue like rust, rapid tire wear, or a broken door handle)? Presumably there would be no check engine light and thus no report. What if the car's diagnosis of its problem(s) is wrong? It appears the company assumes it is right for reliability purposes, and that the repair costs it estimates are accurate. What if your check engine light never comes on (even if you have major problems)? If a car model has a faulty check engine light system that never works, would it be considered ultra-reliable? Finally, are the buyers of these devices an adequate cross-section of the population, and is it an adequate sample for good data?
Bottom line here is that this company is endeavoring to sell a device. They are reporting on reports from the devices they sell. Whether those devices are any good is unknown. I assume a mechanic uses a similar device to try to figure out the problem with a car with a check engine light, but I would also assume that sometimes the car's report on its own problems may not be the real culprit.
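For anyone curious what these plug-in readers actually do: they pull raw trouble-code bytes over the OBD-II port and decode them per the standard SAE J2012 scheme. Here's a minimal Python sketch of that decoding step (the byte values are illustrative; a real reader gets them from the car's ECU, and this says nothing about whether the code reflects the true fault, which is exactly the concern raised above).

```python
# Decode a two-byte OBD-II diagnostic trouble code (SAE J2012 layout).
# Top 2 bits of the first byte pick the system letter; the remaining
# bits and the second byte are the four code digits.

def decode_dtc(b1: int, b2: int) -> str:
    system = "PCBU"[(b1 >> 6) & 0x03]  # Powertrain/Chassis/Body/netWork
    d1 = (b1 >> 4) & 0x03              # first digit: 0-3
    d2 = b1 & 0x0F                     # remaining digits are hex nibbles
    d3 = (b2 >> 4) & 0x0F
    d4 = b2 & 0x0F
    return f"{system}{d1}{d2:X}{d3:X}{d4:X}"

print(decode_dtc(0x04, 0x55))  # P0455: large EVAP leak (e.g. loose gas cap)
```

Note the example code, P0455, is the classic "loose gas cap" EVAP code, which is the kind of thing the "Check Fuel Cap" warning mentioned earlier in this thread maps to.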
Most people are motivated by instant satisfaction. While they may like the concept of playing a part in helping all consumers, the more immediate concern is what do I get out of it?
Plus, as mentioned, all the defects that will not be caught by the monitor.
I just purchased a 2013 RTL and have noticed that the gas gauge, the range-to-empty readout, and how much fuel actually goes in at a fill-up are a little off from each other.
I have run it down fairly low, and the range calculator says 50 miles to empty, which one would think would work out to about 2 or 3 gallons left. Yet I have only been able to get 17.5 gallons in. The dealer says there are no known problems.
Sales guy talked to a mechanic who theorizes that they are tuned/designed to use 87 and running 89 octane burns hotter and messes with the calculations.
Jhahawk - is the low fuel light going on when you run it down? I think there's about 2.5 gallons or so left when the light goes on. As for the octane rating impacting computer accuracy, I don't think the octane value would have an impact on the calculation of fuel used.
If the newer models are like the 2006-2008 models without the trip computer, the LF light comes on when there's about 3.3 gallons remaining until the needle hits E, and then a small (undefined) reserve remains. I estimate there's about 4 gallons remaining when the LF light comes on. At 15 mpg, that's 60 miles.
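A quick back-of-the-envelope check of those numbers in Python. The 22-gallon tank capacity is the commonly cited spec for this generation of Ridgeline, but treat it as an assumption here, along with the 15 mpg figure from the post above.

```python
# Sanity-check the fuel-gauge numbers discussed above.
# Assumptions: ~22 gal tank (commonly cited spec), 15 mpg (from the post).

tank_capacity_gal = 22.0
fill_up_gal = 17.5  # what actually went in at the pump
remaining_gal = tank_capacity_gal - fill_up_gal
print(f"Fuel left at fill-up: {remaining_gal:.1f} gal")

mpg = 15.0
range_left_mi = remaining_gal * mpg
print(f"Range at {mpg:.0f} mpg: {range_left_mi:.0f} miles")
```

Under those assumptions there were about 4.5 gallons still in the tank, good for roughly 67 miles at 15 mpg, so a 50-miles-to-empty readout would just mean the trip computer is being conservative rather than that anything is wrong.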
Honda Ridgeline Owners Club Forums
Honda Ridgeline Owners Club, forum community to discuss reviews, accessories, performance, care, mods, and more.