101 - 113 of 113 Posts
Again, JD Power's ratings are also based on surveys, and they might not be representative of the general population either. JD Power doesn't say how it selects car owners to survey, so it might be an even less representative sample than CR's.

And CR ratings are based on percentages, so that essentially does adjust for sales volume. And they claim to have "the most comprehensive reliability information available to consumers." Since we don't know how many survey responses JD Power or anyone else gets, that might be true.
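To illustrate the point about percentages, here is a toy calculation (hypothetical numbers, not CR's actual data or method) showing why a problem *rate* doesn't penalize a high-volume seller the way raw complaint counts would:

```python
# Toy illustration: a problem rate is a percentage, so a model that sells
# 10x more units does not look 10x worse the way raw counts would suggest.
# All numbers below are invented for the example.

def problem_rate(problems_reported: int, responses: int) -> float:
    """Percent of respondents reporting at least one serious problem."""
    return 100.0 * problems_reported / responses

# Model A: high-volume seller; Model B: low-volume seller.
rate_a = problem_rate(problems_reported=500, responses=10_000)  # 5.0%
rate_b = problem_rate(problems_reported=50, responses=1_000)    # 5.0%

# Raw counts differ 10x (500 vs. 50), but the rates are identical,
# so sales volume alone doesn't make the popular model look worse.
assert rate_a == rate_b == 5.0
```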

But I found a couple of other interesting notes about CR's methodology:

"Consumer Reports members reported on problems they had with their vehicles during the past 12 months that they considered serious because of cost, failure, safety, or downtime in any of the trouble spots included in the table below.

The scores in the charts are based on the percentage of respondents who reported problems experienced from among 20 trouble spots. Because high-mileage cars tend to encounter more problems than low-mileage cars, problem rates were standardized to minimize differences due to mileage. We adjust for the vehicle owner’s age, based on our findings that older owners are more likely to report fewer problems."
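The mileage standardization CR describes can be sketched with a toy linear adjustment (the slope and all numbers are invented for illustration; CR does not publish its actual model):

```python
# Toy sketch of mileage standardization. The idea: higher-mileage fleets
# report more problems, so raw rates are shifted to a common reference
# mileage before models are compared. The linear slope below is an
# assumption made up for this example, not CR's published method.

REFERENCE_MILES = 30_000
RATE_SLOPE_PER_MILE = 1.0 / 10_000  # assumed: +1 point per 10k extra miles

def standardize(raw_rate_pct: float, avg_miles: float) -> float:
    """Shift a raw problem rate to what it would be at REFERENCE_MILES."""
    return raw_rate_pct - RATE_SLOPE_PER_MILE * (avg_miles - REFERENCE_MILES)

# A commuter model averaging 50k miles vs. a weekend model averaging 20k:
high_miles = standardize(raw_rate_pct=7.0, avg_miles=50_000)  # -> 5.0
low_miles = standardize(raw_rate_pct=4.0, avg_miles=20_000)   # -> 5.0

# After standardization both show the same 5% rate: under this (assumed)
# model, the raw 7% vs. 4% gap was entirely attributable to mileage.
assert high_miles == low_miles == 5.0
```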
With all due respect, there are surveys and then there are statistically valid surveys. Of course JD Power uses surveys, but they are statistically valid surveys. None of CR's surveys or product comparisons are statistically valid. Yes, they collect data, but do you notice that their results, which "are based on the percentage of respondents who reported problems experienced," are not normalized for vehicle sales, location, how the vehicle is used, etc.?

Ask your local college statistics professor. Asking subscribers to voluntarily report their experience with their car immediately skews the data sample. It has been proven over the last 60+ years that voluntary surveys cast out to a general population group will get more complaints than compliments.
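The skew from voluntary response can be demonstrated with a small simulation (the response probabilities below are assumptions invented for the example, not measured from any real survey):

```python
import random

random.seed(0)

# Toy simulation of voluntary-response bias: owners with a problem are
# assumed 3x more likely to answer a voluntary survey than satisfied
# owners, which inflates the observed problem rate above the true rate.
# All probabilities here are invented for illustration.

TRUE_PROBLEM_RATE = 0.10      # 10% of all owners actually had a problem
P_RESPOND_IF_PROBLEM = 0.30   # assumed response probability, unhappy owner
P_RESPOND_IF_HAPPY = 0.10     # assumed response probability, happy owner

def simulate(owners: int = 100_000) -> float:
    """Return the problem rate seen among voluntary respondents."""
    responses = problems = 0
    for _ in range(owners):
        has_problem = random.random() < TRUE_PROBLEM_RATE
        p_respond = P_RESPOND_IF_PROBLEM if has_problem else P_RESPOND_IF_HAPPY
        if random.random() < p_respond:
            responses += 1
            problems += has_problem
    return problems / responses

observed = simulate()
# Analytically: (0.10*0.30) / (0.10*0.30 + 0.90*0.10) = 0.25, so the
# survey "sees" roughly a 25% problem rate despite a true rate of 10%.
print(f"true rate: {TRUE_PROBLEM_RATE:.0%}, observed: {observed:.0%}")
```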

JD Power is a world-renowned statistical QA company that goes out of its way to create statistically valid surveys.
CR's Annual Reliability Survey is not statistically valid because it does not adjust its survey population for age, geographic region, income, sex, etc. Nor does the Annual Reliability Guide adjust model reported for sales volume, time on market, etc. Have you ever read their own website?

"Consumer Reports’ expert team of statisticians and automotive engineers used the survey data to predict reliability of new models. Predicted reliability is Consumer Reports’ forecast of how well models currently on sale are likely to hold up." (emphasis added).
So, CR's reliability rating for vehicles is not an actual study of reliability like JD Power's study. Consumer Reports looks at problems with vehicles over the past few years for any particular model, using voluntarily submitted data from CR subscribers; in other words, data on 2010, 2011, and 2012 models is the basis for its predictions about 2013 models.
CR's monthly issue section reporting CR's QA evaluators comparison of equivalent models is much better as each vehicle is evaluated against a standardized CU-generated list of criteria. The Annual New Car Guide is a compilation of these monthly comparison reports, making the Annual New Car Guide a valid evaluation tool.

Last, CR's Annual Reliability survey is an online survey subject to error. All online surveys should be viewed skeptically. See, e.g., The Unreliability of Online Review Mechanisms.

Here are a couple of discussions as to why CR's Annual Reliability survey (featured every July in the monthly magazine) is unreliable:


 
With all due respect, I don't trust J.D. Powers' survey data, either.

Two recent examples of why:

  • How did Dodge go from first place in 2023 to last place in 2024 on initial quality? (Not to mention how they got first place to begin with.)
  • How can Chevrolet and GMC be so far apart in dependability when they roll down the same assembly lines using the same parts?

Hmmm. :)
 
With all due respect, I don't trust J.D. Powers' survey data, either.

Two recent examples of why:

  • How did Dodge go from first place in 2023 to last place in 2024 on initial quality? (Not to mention how they got first place to begin with.)
  • How can Chevrolet and GMC be so far apart in dependability when they roll down the same assembly lines using the same parts?

Hmmm. :)
It depends on what particular survey you're looking at.

For example, you don't state which Dodge model was No. 1 in 2023 or whether the same Dodge model went to last place in 2024. Are you looking at the same year of a particular model? Are you comparing New Car Reliability to the 3-year ownership quality and dependability report? Are you comparing the same year and equivalent model when comparing GMC vs. Chevy models?

It's impossible for anyone to answer unless you provide the specific data you are referencing.

As for an excellent example of how CR's reports on similar models are screwed up, look no further than the classic example used in statistics class: the Toyota Matrix compared to the Pontiac Vibe.
Both models rolled off the assembly line at the NUMMI plant in Fremont, CA. Although the Vibe and Matrix are similar designs and shared many components, the two brands actually used several different brand-specific components for their heating and air conditioning systems. These components include the air conditioning compressor and related hoses, the heater hoses, the heater core, and the serpentine belt. Other than that, they were essentially the same car. But, CR's Annual Reliability ratings found subscribers hating their Vibes but loving their Matrix.

Why all the complaints about the Vibe but not the Matrix?

It wasn't the vehicle -- the cause of the Vibe's down-rating was the dealer experience. Toyota dealers handled their customers' complaints by solving them; Pontiac owners were given the usual American car dealer two-step brush-off.
 
See bolded part.



Also see my post on another forum...

JD only says it is a survey "of ownership in that period." Maybe the Dodge Hornet is THAT bad. :) (the only real ownership numbers are in 2024, right?)

...Or some sort of model year production fulfillment anomaly from 2023, hence why the Buick is in 2024. They don’t detail their measurements that well.
 
With all due respect, I don't trust J.D. Powers' survey data, either.

Two recent examples of why:

  • How did Dodge go from first place in 2023 to last place in 2024 on initial quality? (Not to mention how they got first place to begin with.)
  • How can Chevrolet and GMC be so far apart in dependability when they roll down the same assembly lines using the same parts?

Hmmm. :)
It's even more odd when you consider that right now they are using data for 2023 vehicles to rate 2024 vehicles, which they do until they update for 2024 in September. And they claim that for models not redesigned the data is an accurate reflection of the following year performance, which isn't necessarily true. Some fixes are made when there are known problems, but they aren't redesigns. It would be better to just let users decide if they want to knowingly rely on 2023 data. I know CR will make predictions of expected reliability, but they don't do it just by using the previous data on that vehicle. They consider history, but also changes and reliability of vehicles with the same or similar components, etc.

From JD Power's "Ratings Methodology" page (which also doesn't explain any methodology that would make their ratings any better than CR's):

"For current model year data, prior year information is used until all data is updated in September each year. For example, 2023 model year vehicle ratings are based on model year 2022 data until the September update of 2023. For models that are not redesigned, this “carryover” data is an accurate reflection of the following year performance."

 
IMO these surveys are just one input to look at when deciding on a vehicle you want to purchase. Lots of people still buy the bottom-rated cars. According to GM, people make a purchase decision within 3 minutes of sitting in a car at a dealership. That's not buying based on objective surveys or anything else; that's a tug at the heartstrings only.

My RTL purchase was made over a period of months. I tried a couple out while on the road in different cities, tried a few of the competition too over those months while out of town travelling, looked at a few new ones in a showroom, had test drives in Fords and GMCs and RAMs, and looked at surveys and the usual BS online. But for me, what sealed the deal was the test drive of a Ridgeline while in NJ. It was what I wanted compared to the others. I did like the others; it was just that the RTL fit exactly what I wanted: less radio BS, less driving-aid BS, no sun roof trouble (because no sun roof), enough hp to satisfy, a bed instead of an SUV but riding like all my SUVs previously, and the price was OK.

And I still have it for those reasons. All original parts and never been to a dealership for repairs is a side benefit. I do go out of my way to ensure that too by testing like I have. If it starts to go south with a bad oil report card, it'll be history long before something blows. That's what testing gives you, like an extended warranty does too (which I bought into as well). Only two more years till 10. Then, well, we'll see.
 
Thanks for the links J P. They contained some interesting information, but also some ridiculous and even wrong info. The first one doesn't really address the issue of the accuracy of CR's reliability surveys vs. anyone else's at all. And it incorrectly claims that the CR rating is affected equally by a blown engine or a broken ashtray. CR asks about problems that the respondent considers serious due to cost, lost time, etc., so respondents don't necessarily list every item they have a problem with. As for the complaint that CR doesn't report actual reliability based on long-term ownership and use of the cars (preferably 10 of each model): well, neither does JD Power.

Also, JD Power sending out random surveys doesn't fix the problem of people being more likely to respond if they are unhappy and want to complain about their vehicle. I also haven't seen any info that JD Power normalizes or adjusts its survey info based on the factors you listed, and even if it did, that adjustment would likely be somewhat subjective. But CR does do some normalization too, as I mentioned above. I don't see support for the assertion that JD Power surveys are statistically valid while CR's aren't.

I did find an article which says JD Power claims (sorry for the poor grammar, it's in the original quote): "J.D. Power is one of the only sources of consumer ratings based on independent and unbiased feedback from a representative sample of verified product owners." This does bring to mind the question: what are the other sources?

I also learned that JD Power probably asks a lot more questions than CR, but I haven't seen them. That could lead to good, more detailed info, or to more complaints about minor issues, and it could make people even less likely to take the time to do the survey unless they want to complain.


 
And personally, my opinion of both of those survey systems is lousy data in, lousy data out. Until all manufacturers report their info to them, we'll never know the real failure rates of these things. And we will rely on guessing and questionable data from "polling," like politics.
 
Internal warranty data from the OEM would reveal a more complete story.....but we will never see that.

Although flawed...........the various survey sources we have been talking about do provide some insight as to trouble spots in particular vehicles and vehicles that are sub par overall.

I hope we can all agree that Toyotas are much more reliable than a VW or a Chrysler.

..........and if you flip through the annual CR charts that used to be colored with green and red circles......some entire pages were bleeding red while other pages had a nice green glow.

You could also pick out a problem child in an otherwise good brand.
 
Internal warranty data from the OEM would reveal a more complete story.....but we will never see that.

Although flawed...........the various survey sources we have been talking about do provide some insight as to trouble spots in particular vehicles and vehicles that are sub par overall.

I hope we can all agree that Toyotas are much more reliable than a VW or a Chrysler.

..........and if you flip through the annual CR charts that used to be colored with green and red circles......some entire pages were bleeding red while other pages had a nice green glow.

You could also pick out a problem child in an otherwise good brand.
Yes, I'll continue to use CR, and probably look at JD Power too. I was never very concerned about the initial quality surveys I'd frequently see from JD Power, but only within the last year did I learn about the 3-year dependability surveys.

And while reliability is an important factor, it is never the only factor that's important to me.
 
See bolded part.



Also see my post on another forum...

You apparently didn't read the comment by bwilson4web to your linked post. In 2024, JD Power changed their survey to incorporate "Voice of the Customer (VOC) data to create a more expansive metric for problems per 100 vehicles (PP100) [for] the J.D. Power 2024 U.S. Initial Quality Study (IQS)" See https://www.jdpower.com/business/press-releases/2024-us-initial-quality-study-iqs.

As a former Chrysler fan from the 1960s forward to the 1990s, including the "customer experience" in the Initial Quality Survey more correctly reflects the piss-poor quality of the average Stellantis (formerly Chrysler-Fiat) product. That's why the rating for Dodge tanked -- while the product coming off the assembly line met or exceeded the older JD Power criteria, which measured fit-and-finish and initial quality, the customer experience with defects in design and materials that developed later exposed the flaws in the initial quality reports. [I've had to educate my 30-something mechanic on the former high-quality engineering that was Chrysler Corporation "back in the day."]

[Think VW with their diesel emissions "cheating." Their diesels passed initial quality testing, but in the real world ...]

Kia's and its sister manufacturer Hyundai's ratings went up in '24 because (finally!) their quality and trouble-free ownership customer experience was measured.

Why are Chevrolet and Buick near the top while GMC and Cadillac are below the industry average?
Because while they may roll off the same assembly line, they do not necessarily use 100% of the same parts. Granted, engines, unibodies, frames, body panels, infotainment systems, etc. may be the same, but other parts are not. For example, certain model GMC trucks use different springs and shocks than Chevy models. Other times, differences in quality depend on identically numbered parts coming from different suppliers. My friend and I at one time had the same model 1500 pickup, but he had the GMC Sierra and I had the Chevy Silverado. We helped each other out with maintenance and repairs, and we quickly found out that not all parts were interchangeable -- or that the same part number necessarily corresponded to the same OEM part. [We had a heck of a time once when we had to order and re-order the same part number from different sources because the common part associated with that part number didn't match what was installed.]

Why is Ford in the upper third of the list and Lincoln is in the lower third?
Ah, this one I can answer -- Lincoln models had more electronic doodad-laden infotainment systems installed in them than the Ford versions, and the Lincoln versions universally sucked compared to the less feature-laden Ford versions. Again, the 2024 JD Power ratings reflected consumer inputs that helped keep Lincoln near the bottom. [Although I like Lincoln, and if you're like me, who doesn't bother with anything more sophisticated than Bluetooth for my phone, used Lincolns are IMHO a steal.]

Here's a question for you -- why are Volvos so low on both CR and JD Power reports? Until very recently, I owned a 2006 Volvo XC-90 2.5L Turbo AWD. I put over 200,000 miles on it before selling it to a friend (who has put another 30,000 on it). In the entire time I owned it, the only part failure was the AC compressor. I did the manufacturer's (not the dealer's) scheduled maintenance. The clear coat is finally starting to fail and some of the plastic parts are breaking (necessitating trips to junk yards), but it still runs like a top (even the AC).

My purchase at the time got 3 more people on my street to buy Volvos -- two more XC-90s and one XC70 station wagon. None of us experienced more than a single OEM component failure.

So, why is that manufacturer so poorly rated? Good question! It wasn't highly rated at the time I bought it -- by CR or JD Power, but I did my own research and decided to buy anyway. IMHO, I made a good purchase decision.

Look -- I understand you have a problem with JD Power reports. OK, fine -- this isn't a one-upmanship contest. Re-read what I wrote before:

1. The CR Annual New Car Guide is fairly reliable.
2. The monthly CR vehicle reviews are fairly reliable.
3. The CR Annual Reliability Guide is not reliable because of the non-valid data collection methods used.
4. CR reviews emphasize cost-effectiveness, which is a subjective measure since CR's evaluations aggregate or estimate what a particular consumer wants and needs.
5. JD Power reports are more statistically rigorous, and (as the 2024 change in research measures indicates) constantly revised to more correctly reflect market realities.
6. The JD Power reports, combined with CR's Annual New Car Guide and CR's monthly vehicle reviews, should provide the average non-car-fanatic consumer with considerable data to make an informed purchase.
 
It wasn't the vehicle -- the cause of the Vibe's down-rating was the dealer experience. Toyota dealers handled their customers' complaints by solving them; Pontiac owners were given the usual American car dealer two-step brush-off.
Very possible - although bad dealers seem more random than bad car models. As I mentioned earlier, there is an expectation of results when some people buy a Toyota or Subaru based on CR data, the same way they buy a new fridge or TV. If you buy a car expecting it to be reliable and then it isn't, you may be privately upset but publicly less likely to admit your "error." It even has a name: confirmation bias.

My S-in-law has had four Subaru Foresters, which she loves. She says they are very reliable, but I know she has had head gaskets replaced on two different cars (at her expense), plus some very expensive brake work ($2K+). In ten years (90K miles), I spent less than $300 on our runaround PT Cruiser, which CR had as a "Don't Buy."

CR never addresses the situations you highlight - Audis and VWs often share 90% of the same parts, as do many other brands (although VW and Audi seem to have the widest CR ratings divergence, I think). And around here, they are sold by the same dealers. So that just leaves the owners and their expectations (or actions).
 