Patterns and Correlates of Patient Referral to Surgeons for Treatment of Breast Cancer
Characteristics of surgeons and their hospitals have been associated with cancer treatments and outcomes. However, little is known about the referral pathways through which patients reach these providers.
We analyzed tumor registry and survey data from women with breast cancer diagnosed in 2002 and reported to the Detroit and Los Angeles Surveillance, Epidemiology, and End Results registries (n = 1,844; response rate, 77.4%) and their attending surgeons (n = 365; response rate, 80.0%).
About half of the patients (54.3%) reported that they were referred to the surgeon by another provider or health plan; 20.3% reported that they selected the surgeon; and 21.9% reported that they both were referred and were involved in selecting the surgeon. Patients who selected the surgeon based on reputation were more likely to have received treatment from a high-volume surgeon (adjusted odds ratio [OR], 2.2; 95% CI, 1.5 to 3.4) and more likely to have been treated in an American College of Surgeons–approved cancer program or a National Cancer Institute (NCI)-designated cancer center (adjusted OR, 2.0; 95% CI, 1.3 to 3.1; adjusted OR, 3.4; 95% CI, 1.9 to 6.2, respectively). Patients who were referred to the surgeon were less likely to be treated in an NCI-designated cancer center (adjusted OR, 0.5; 95% CI, 0.3 to 0.9).
Women with breast cancer who actively participate in the surgeon selection process are more likely to be treated by more experienced surgeons and in hospitals with cancer programs. Patients should be aware that provider or health plan–based referral may not connect them with the most experienced surgeon or comprehensive practice setting in their community.
Characteristics of surgeons and their institutions have been associated with processes and outcomes of cancer care.1 Surgeon procedure volume and subspecialty training have been associated with treatment patterns, morbidity, and mortality.2-6 Hospital procedure volume and cancer center designation have been associated with patterns of treatment and health outcomes.5,7,8 Relationships between clinician and hospital characteristics and patterns of treatment have been examined for women with breast cancer. Surgeon volume has been associated with patterns of local therapy such as definitive surgery, radiation after breast-conserving therapy, and breast reconstruction after mastectomy.9-15 Thus, breast cancer patients with similar clinical characteristics may receive different treatment depending on the type of surgeon who provides their care and the institution where they receive treatment.
These observations have motivated recommendations to concentrate the initial care of cancer among the most experienced providers.5 One strategy that has been invoked is to guide patients to surgeons and centers with the best quality record.7 This strategy would provide information about practice-related attributes of clinicians or institutions to encourage referral to the highest quality providers. There are, however, challenges to this strategy. First, knowledge about the link between the structure of cancer care delivery and patient outcomes is not sufficient to distinguish accurately between clinical practices based on quality of care. Second, profiling individual clinicians is limited by statistical challenges, and the logistics and cost of data collection and dissemination. Finally, it may not be feasible or desirable to reshape referral relationships among the large number of specialists who evaluate and treat patients with breast cancer. However, there is little information about patterns and correlates of patient referral to providers and their institutions. To address this issue we used survey data from a large population-based sample of patients recently diagnosed with breast cancer and their attending surgeons to answer the following research questions. First, how are surgeons who treat newly diagnosed patients with breast cancer selected? Second, what is the association between how surgeons are selected and characteristics of the surgeon (breast cancer surgery volume) and hospital (cancer program status)?
Details of the sampling and data collection procedures for the patient and surgeon surveys have been published elsewhere.9,15 We surveyed women age 79 years and younger diagnosed with ductal carcinoma in situ (DCIS) and invasive breast cancer, identified by the Surveillance, Epidemiology, and End Results (SEER) registries of the greater metropolitan areas of Detroit and Los Angeles between December 2001 and January 2003. We selected all patients with DCIS and a random sample of invasive cases (oversampling African American women) each month into the preliminary study sample (N = 2,647). The survey was completed by 77.4% of eligible women (n = 1,844). SEER data were merged with survey data for 98.2% of patients. Pathology reports were used to identify surgeons (n = 456) for 98.5% of the patient sample. Surgeons subsequently were mailed a questionnaire and cash gift. The surgeon response rate was 80.0% (n = 365).
The patient and surgeon data were merged primarily using information from pathology reports. Most respondent patients (94.6%) had exactly one surgeon identified from the pathology reports. Patient report of the name of their attending surgeon compared with the surgeon identified from their pathology reports showed high agreement in a convenience sample of 908 respondent patients (99% in Detroit and 90% in Los Angeles). Patient report was used to merge patient and surgeon respondents in cases where pathology information either failed to identify a respondent surgeon (2.8%) or identified two respondent surgeons (2.6%). The final merged data set contained complete patient-surgeon dyad information for 64.6% of accrued and eligible patients (n = 1,539) and 69.7% of accrued surgeons (n = 318). Patients excluded from the final study sample were less likely to be white (62.2% v 72.6%; P < .001) and were more likely to have received a mastectomy (34.7% v 30.0%; P = .021). Surgeons who were excluded from the study sample were less likely to be a high-volume surgeon (20.6% v 32.0%; P = .011), but there were no differences in age, sex, or years in practice.
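The linkage rule described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code; the function name and records are hypothetical.

```python
# Sketch of the patient-surgeon linkage rule: prefer the single surgeon
# identified on the pathology report; fall back to the patient's
# self-reported surgeon when the report yields zero or two respondent
# surgeons. Names here are hypothetical placeholders.
def link_surgeon(pathology_surgeons, patient_reported):
    """Return the attending surgeon for one patient-surgeon dyad."""
    if len(pathology_surgeons) == 1:
        return pathology_surgeons[0]
    # Ambiguous (two respondent surgeons) or missing pathology match:
    # use the patient's own report of the attending surgeon.
    return patient_reported

print(link_surgeon(["Dr. A"], "Dr. B"))           # pathology report wins
print(link_surgeon(["Dr. A", "Dr. C"], "Dr. B"))  # fall back to patient report
```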
The dependent variables were measures of surgeon procedure volume and hospital cancer program status. Surgeon breast cancer procedure volume was based on surgeon report of the percentage of their total practice devoted to breast cancer–related procedures. Responses were collapsed into three categories: less than 25%, 25% to 50%, and more than 50%. These variable categories were highly associated with the number of cases of breast cancer reported to the two SEER registries for each respondent surgeon in 2002 (mean, 44, 67, and 124, respectively). Hospital cancer program status was determined using information from the Cancer Program of the American College of Surgeons (ACoS).16 ACoS cancer program approval is based on a periodic onsite evaluation that considers the comprehensiveness of services, care coordination, patient support programs, and monitoring and improvement of care. Information was gathered for the 114 hospitals where one or more patients in the study sample were treated: National Cancer Institute (NCI)-designated comprehensive cancer center (n = 4), other ACoS-approved cancer program (n = 35), and no ACoS-approved cancer program (n = 75).
The primary independent variable was patient self-report of how the surgeon was selected. Patients were asked: “Which of the following statements describe how the surgeon who performed your breast surgery was selected?” Patients could select all that applied from the following items: (1) The surgeon was one of the only surgeons available through my health care plan; (2) I was referred to the surgeon by another doctor; (3) I chose this surgeon because of his/her reputation; (4) This surgeon was recommended to me by a relative or friend; (5) I chose this surgeon because I wanted to be treated at the medical institution where he/she worked; (6) I wanted a surgeon who practiced near my home; or (7) I chose this surgeon because of some other reason. We created three variables based on these responses. Patients were categorized as referred to their surgeon if they endorsed either item (1) or (2); selected their surgeon based on reputation if they endorsed any of the items (3), (4), or (5); and selected based on proximity if they chose item (6).
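The coding of the three derived variables can be sketched in a few lines. The item numbering follows the survey wording above; the variable names are ours, not from the study's codebook.

```python
# Derived selection variables from the seven survey items described above.
# Items 1-2 -> referred; items 3-5 -> selected on reputation; item 6 -> proximity.
REFERRED_ITEMS = {1, 2}       # health plan restriction, physician referral
REPUTATION_ITEMS = {3, 4, 5}  # reputation, relative/friend, institution
PROXIMITY_ITEMS = {6}         # surgeon practiced near home

def code_selection(endorsed_items):
    """Map a patient's set of endorsed items (1-7) to three indicators."""
    items = set(endorsed_items)
    return {
        "referred": bool(items & REFERRED_ITEMS),
        "selected_reputation": bool(items & REPUTATION_ITEMS),
        "selected_proximity": bool(items & PROXIMITY_ITEMS),
    }

# Because patients could endorse multiple items, a patient referred by
# her doctor who also chose on reputation falls into both groups.
print(code_selection([2, 3]))
```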
Additional patient variables included tumor behavior, patient age, race/ethnicity, education, income, the number of surgeons consulted before surgery, and geographic site. Additional surgeon variables included sex and the number of years in practice since completing training. Hospital breast cancer surgery volume was defined as the number of breast cancer surgeries performed in 2002 in each treating hospital and reported to the registries.
We examined the distribution of how surgeons were selected by patient characteristics. We then used logistic regression to examine the independent association of covariates (tumor behavior, patient age, race, income, education, geographic site, whether the patient was referred to the surgeon, and whether the patient selected the surgeon) with surgeon volume specified as a dichotomous variable (1 > 50%; 0 = 50% or less). Variables indicating surgeon sex, years in practice, and hospital cancer program status were also included in this model.
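The adjusted odds ratios reported in the Results come from this multivariable model. For intuition only, an unadjusted odds ratio and its 95% Wald confidence interval can be computed from a 2×2 table; the sketch below uses made-up counts, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR and 95% Wald CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf (delta-method) formula.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: outcome = treated by a high-volume surgeon,
# exposure = patient selected the surgeon based on reputation.
or_, (lo, hi) = odds_ratio_ci(120, 180, 150, 500)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

The multivariable model additionally adjusts each estimate for the covariates listed above, which a 2×2 table cannot do; the sketch only illustrates the OR scale on which results are reported.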
We then used multinomial logistic regression to examine the association of independent variables with hospital cancer program status (NCI-designated center, other ACoS cancer program, and no cancer program as the baseline category). Surgeon volume and hospital breast cancer surgery volume were included in this model. Wald tests were used to test for differences for group variables. All second-order interactions were evaluated and none were statistically significant. Data were weighted to account for the sampling design and nonresponse. Coefficient SEs were calculated to account for patient clustering within surgeons.17,18
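Weighting for the sampling design and nonresponse, as described above, amounts to inverse-probability weights. A minimal sketch, with hypothetical sampling fractions (only the 77.4% response rate comes from the text):

```python
# Illustrative design weight: inverse of the probability that a patient
# was sampled AND responded. Sampling probabilities are hypothetical.
def design_weight(sampling_prob, response_rate):
    return 1.0 / (sampling_prob * response_rate)

# All DCIS cases were selected (probability 1.0); invasive cases were
# randomly sampled. The 0.5 fraction below is an assumption for
# illustration, not the study's actual sampling rate.
w_dcis = design_weight(1.0, 0.774)
w_invasive = design_weight(0.5, 0.774)
print(round(w_dcis, 2), round(w_invasive, 2))
```

Each patient's weight then multiplies her contribution to the regression likelihood, so the weighted sample reflects the underlying population despite oversampling of DCIS and African American women.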
Table 1 shows the characteristics of the study population. About one fifth of patients had DCIS on initial pathology reports. The mean age was 59.7 years. Two thirds were white, whereas 18.7% were African American. Almost two thirds of the patients had at least some college education and 18.2% reported incomes below $20,000. Patients were about equally divided between the two geographic sites. About one third of patients reported that they consulted two or more surgeons before surgery. About one third of patients were treated by a high-volume surgeon, defined as having more than 50% of total practice devoted to breast cancer surgery. About one third of patients were treated by a female surgeon. On average, patients were treated by attending surgeons who reported that they had been in practice 16 years after completing training. About one third of patients were treated in a hospital without an ACoS-approved cancer program or NCI-designated cancer center. The median number of definitive breast cancer surgeries performed in the treating hospitals was 166.
Figure 1 shows the distribution of patient responses regarding how the attending surgeon was selected. Percentages sum to greater than 100% because respondents could select more than one category. Nearly two thirds of patients reported that they were referred to their surgeon by another doctor; 14.9% reported that they were referred by their health plan; about one fourth reported that they chose the surgeon based on reputation; 15.3% selected their surgeon based on the institution; 12.9% chose the surgeon based on the recommendation of family or friends, and 8.7% chose a surgeon based on proximity. When we combined responses we found that 54.3% reported that they were referred but did not select their surgeon; one fifth (21.9%) reported that they were referred and they selected their surgeon; one fifth (20.3%) reported that they selected their surgeon but were not referred by a provider or plan. The remaining patients (4.9%) reported that they had a prior relationship with their surgeon primarily through a previous surgery. Selection of their surgeon based on reputation was more frequently reported by white patients (36.5% v 26.5%; P < .001), more highly educated patients (40.0%, 38.7%, 24.4%, 21.6%, respectively, for highest to lowest education categories; P < .001), patients with higher incomes (54.4%, 43.8%, 35.4%, and 35.4%, respectively, for highest to lowest income categories; P < .001), and patients who consulted with two or more surgeons before surgery (43.5% v 28.5%; P < .001). There was no association with age, tumor behavior, and geographic site. Patient report of selecting the surgeon based on proximity or of being referred to the surgeon was not associated with patient clinical or demographic factors.
Table 2 lists adjusted odds ratios (aORs) for receipt of surgical treatment by a high-volume surgeon controlling for other factors. Patients who selected their surgeon based on reputation were more likely to have received treatment from a high-volume surgeon (aOR, 2.2; 95% CI, 1.5 to 3.4). There was no independent association between being treated by a high-volume surgeon and the other selection categories. The aOR for being treated by a high-volume surgeon was 0.4 (95% CI, 0.2 to 0.8) and 1.0 (95% CI, 0.6 to 1.7) for patients who selected their surgeon based on proximity and those who were referred by a doctor or health plan, respectively. The positive association between patients who selected their surgeon by reputation and treatment by high-volume surgeons was observed after controlling for patient factors and hospital factors (breast cancer surgery volume and cancer program status), and there were no significant interactions.
Table 3 shows adjusted odds ratios for receiving surgical treatment in hospitals with different cancer programs, controlling for other factors including surgeon procedure volume and hospital surgery volume. The second column shows adjusted odds ratios for receipt of surgical treatment in NCI-designated comprehensive cancer centers versus hospitals without an NCI-designated cancer center or ACoS-approved cancer program. The third column shows adjusted odds ratios for receipt of surgical treatment in hospitals with an ACoS-approved cancer program versus hospitals with neither. Patients who selected their surgeon based on reputation were more likely to receive treatment in NCI-designated cancer centers (aOR, 3.4; 95% CI, 1.9 to 6.2) or other ACoS-approved programs (aOR, 2.0; 95% CI, 1.3 to 3.1) than in hospitals without approved cancer programs. Patients who were referred to their surgeon were less likely to be treated in a hospital with an NCI-designated cancer center (aOR, 0.5; 95% CI, 0.3 to 0.9) versus hospitals without approved cancer programs.
In this population-based study, we found that about one third of patients were treated by surgeons who devoted more than 50% of their practice to breast cancer, and approximately two thirds of patients were treated in an NCI-designated cancer center or ACoS-approved cancer program. About three fourths of patients reported that they were referred to their surgeon by a provider or health plan, whereas 42% reported that they selected their surgeon. There was considerable overlap between these categories: about one fifth of patients reported that they were both referred to and selected their surgeon.
Patients who reported that they selected their surgeon based on reputation were more likely to be treated by a high-volume breast surgeon after controlling for other factors including cancer program status and hospital breast cancer surgery volume. These findings suggest that patients select high-volume breast surgeons independent of the practice setting.
Patients who selected their surgeon based on reputation were more likely to have seen two or more surgeons before surgery. This likely reflects one pathway to high-volume surgeons: patients seeking a second opinion. Patients who selected their surgeon based on reputation were also more likely to be treated in a hospital with an NCI-designated cancer center or ACoS-approved cancer program after controlling for other factors including individual surgeon procedure volume and hospital surgery volume. This suggests that some patients select their surgeons based on the reputation of their clinician's institution.
Patient report of being referred to the surgeon by other providers or health plans was not associated with whether a patient was treated by a high-volume surgeon. However, patients who reported provider or health plan–based referral were less likely to be treated in an NCI-designated cancer center after controlling for other factors. This was consistent across geographic sites.
Several limitations of the study merit comment. Breast surgery volume was determined by surgeon self-report of the percentage of their total practice devoted to breast cancer. This variable was highly associated with the number of breast surgeries performed by individual respondent surgeons and reported to the two participating SEER registries in 2002. However, as a self-reported measure it may be subject to some misclassification. Lack of detail about health insurance may have introduced unmeasured confounding because some health plans limit patient referral to contracted provider networks. However, the option "the surgeon was one of the only surgeons available through my health care plan" was listed among reasons why the treating surgeon was selected and was endorsed by 14.9% of respondents. Finally, although the sample size was large and patient and surgeon response rates were high, differences in respondent characteristics between those included versus excluded from the analyses may limit generalizability.
Our results have important implications for patient care and policy. Physician experience and hospital cancer program status have been associated with different treatment experiences for patients with breast cancer.1-15,19,20 These findings have motivated recommendations to concentrate the first course of treatment for cancer by guiding patients to the providers with the best quality record. Our findings suggest that women who were more actively involved in selecting their surgeon were more likely to be treated by surgeons more experienced in breast surgery and in more comprehensive treatment settings. In contrast, the provider-based referral pathway was not associated with surgeon volume.
We can only speculate about why patients who reported being referred to their surgeon were less likely to receive surgery in NCI-designated cancer centers. Virtually all patients with newly diagnosed breast cancer consult with a surgeon after a problem is identified through an abnormal mammogram or detection of a breast mass suggestive of cancer. The referral patterns of surgeons may be based largely on organizational and provider relationship factors. Organizational factors such as restricted provider networks within health maintenance organizations or preferred provider organizations may play an important role. Informal professional and social network factors may also play an important role. Most surgeons who perform breast surgery are general surgeons with diverse clinical practices. Patient referral to these surgeons may reflect provider relationships built on general surgical practice availability and performance rather than surgeons' specific expertise in treatment of breast cancer. Given the limited literature linking the structure of cancer care delivery to patient outcomes, community physicians may not be convinced that targeting referral to high-volume surgeons or hospitals with cancer programs would add clinically important value for most newly diagnosed patients with breast cancer seen in their practice.
Additional referral barriers may partly explain the inverse association observed in this study between provider-based referral and treatment in an NCI-designated cancer center. Community clinicians may perceive disadvantages of referral to NCI-designated cancer centers because of concerns about losing patients, concerns about continuity and coordination between themselves and other providers, or concerns that their patients may face difficulties when navigating different systems of care. Given the large number of surgeons who care for patients with breast cancer and the myriad of potential factors shaping current patient referral patterns, it may be difficult to concentrate breast cancer surgical care by influencing referral practices.
More research is needed to address the quality implications of the referral patterns identified in this study. This research should address the relationship between provider characteristics and key delivery system factors such as patient-provider communication, provider-provider communication, patient decision and care support, and practice management initiatives. Ultimately, these factors should be linked to outcomes such as use of effective treatments, patient satisfaction, and quality of life. In the meantime, women with breast cancer should be aware that provider-based referral might not connect them with the most experienced surgeons or the most comprehensive practice setting in their community. Patients might consider a second opinion, especially if they are advised to undergo a particular procedure without a full discussion of treatment options or a clear medical rationale for the recommendation.
The authors indicated no potential conflicts of interest.
Conception and design: Steven J. Katz, Timothy P. Hofer, Monica Morrow
Financial support: Steven J. Katz
Administrative support: Steven J. Katz, Kendra Schwartz, Lihua Liu, Dennis Deapen
Provision of study materials or patients: Steven J. Katz, Kendra Schwartz, Lihua Liu, Dennis Deapen
Collection and assembly of data: Steven J. Katz, Paula M. Lantz, Nancy K. Janz, Kendra Schwartz, Lihua Liu, Dennis Deapen
Data analysis and interpretation: Steven J. Katz, Timothy P. Hofer, Sarah Hawley, Paula M. Lantz, Nancy K. Janz, Monica Morrow
Manuscript writing: Steven J. Katz, Timothy P. Hofer, Sarah Hawley, Paula M. Lantz, Nancy K. Janz, Monica Morrow
Final approval of manuscript: Steven J. Katz, Timothy P. Hofer, Sarah Hawley, Paula M. Lantz, Nancy K. Janz, Kendra Schwartz, Lihua Liu, Dennis Deapen, Monica Morrow
Table 1. Characteristics of the Study Population

| Characteristic | Value |
| --- | --- |
| Mean age, years | 59.7 |
| No. of surgeons consulted prior to surgery, % | |
| 3 or more | 6.6 |
| Individual surgeon breast cancer volume* | |
| No. of years postsurgical training, mean | 16 |
| Hospital cancer program status, % | |
| No approved cancer program | 34.6 |
| Other ACoS-approved cancer program | 54.3 |
| Hospital breast cancer surgery volume in 2002 | |
| 10th to 90th percentile | 57-250 |
| Patient selected surgeon based on reputation, % | 33.4 |
| Patient selected surgeon based on proximity, % | 8.7 |
| Patient referred to surgeon, % | 75.0 |
| Patient had prior experience with surgeon, % | 4.9 |
NOTE. Figures are weighted.
Abbreviations: DCIS, ductal carcinoma in situ; HS, high school; NCI, National Cancer Institute; ACoS, American College of Surgeons.
*Percentage of total practice devoted to breast cancer.
Table 2. Adjusted Odds Ratios for Receipt of Surgical Treatment by a High-Volume Surgeon

| Variable | Odds Ratio | 95% CI | Wald Test | P |
| --- | --- | --- | --- | --- |
| Invasive disease | 1.2 | 0.7 to 1.7 | | |
| Age | 0.98 | 0.96 to 0.99 | | |
| Race | | | | |
| Black | 0.9 | 0.4 to 2.0 | | |
| Other | 1.4 | 0.7 to 2.4 | | |
| Education | | | | |
| HS graduate | 1.4 | 0.7 to 2.6 | | |
| Some college | 1.4 | 0.7 to 2.8 | | |
| College graduate | 2.1 | 1.0 to 4.4 | | |
| Income | | | | |
| < $50,000 | 0.7 | 0.4 to 1.3 | | |
| < $90,000 | 0.7 | 0.4 to 1.4 | | |
| ≥ $90,000 | 0.9 | 0.4 to 2.0 | | |
| Unknown | 0.7 | 0.4 to 1.4 | | |
| Detroit v Los Angeles | 1.6 | 0.6 to 4.0 | | |
| Female surgeon | 28.5 | 8.3 to 97.6 | | |
| Years in practice after training | 1.1 | 1.0 to 1.2 | | |
| Hospital cancer program | | | 6.5 | .040 |
| No approved program | 1.0 | | | |
| NCI-designated center | 5.2 | 1.2 to 22.6 | | |
| Other ACoS-approved cancer program | 1.8 | 0.8 to 4.2 | | |
| Patient selected surgeon based on reputation | 2.2 | 1.5 to 3.4 | | |
| Patient selected surgeon based on proximity | 0.4 | 0.2 to 0.8 | | |
| Patient referred to surgeon | 1.0 | 0.6 to 1.7 | | |
NOTE. High-volume surgeon is defined as > 50% of total practice devoted to breast cancer–related surgery.
Abbreviations: DCIS, ductal carcinoma in situ; HS, high school; NCI, National Cancer Institute; ACoS, American College of Surgeons.
Table 3. Adjusted Odds Ratios for Receipt of Surgical Treatment by Hospital Cancer Program Status

| Variable | OR† | 95% CI | Wald Test | P | OR‡ | 95% CI | Wald Test | P |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Invasive disease | 1.2 | 0.8 to 1.9 | | | 0.9 | 0.7 to 1.2 | | |
| Age | 1.0 | 0.9 to 1.0 | | | 1.0 | 0.9 to 1.0 | | |
| Race | | | | | | | | |
| Black | 1.6 | 0.7 to 3.7 | | | 0.5 | 0.2 to 0.9 | | |
| Other | 1.9 | 0.7 to 4.7 | | | 0.9 | 0.5 to 1.7 | | |
| Education | | | | | | | | |
| HS graduate | 1.3 | 0.6 to 2.4 | | | 1.5 | 1.5 to 4.1 | | |
| Some college | 1.0 | 0.5 to 2.3 | | | 1.1 | 0.6 to 1.9 | | |
| College graduate | 1.3 | 0.8 to 3.9 | | | 1.3 | 0.6 to 2.4 | | |
| Income | | | | | | | | |
| < $50,000 | 0.5 | 0.3 to 0.9 | | | 0.8 | 0.5 to 1.2 | | |
| < $90,000 | 0.8 | 0.3 to 1.7 | | | 1.0 | 0.6 to 1.7 | | |
| ≥ $90,000 | 0.4 | 0.2 to 1.0 | | | 0.8 | 0.5 to 1.5 | | |
| Unknown | 1.5 | 0.7 to 3.5 | | | 1.5 | 0.8 to 2.8 | | |
| Detroit v Los Angeles | 4.8 | 1.7 to 9.7 | | | 1.8 | 0.9 to 3.6 | | |
| Surgeon surgery volume | | | 11.9 | < .001 | | | 2.0 | .367 |
| Intermediate | 4.2 | 1.8 to 17.9 | | | 1.1 | 0.6 to 2.4 | | |
| Highest | 9.6 | 3.5 to 53.6 | | | 2.0 | 0.8 to 5.1 | | |
| Hospital breast cancer surgery volume | 1.0 | 1.0 to 1.1 | | | 1.0 | 1.0 to 1.1 | | |
| Selected surgeon based on reputation | 3.4 | 1.9 to 6.2 | | | 2.0 | 1.3 to 3.1 | | |
| Selected surgeon based on proximity | 0.3 | 0.8 to 0.9 | | | 1.2 | 1.7 to 2.2 | | |
| Referred to surgeon | 0.5 | 0.3 to 0.9 | | | 1.3 | 0.9 to 2.0 | | |

†NCI-designated cancer center v no approved cancer program.
‡Other ACoS-approved cancer program v no approved cancer program.
Abbreviations: NCI, National Cancer Institute; ACoS, American College of Surgeons; DCIS, ductal carcinoma in situ; HS, high school.
Supported by the National Cancer Institute (Grant No. R01 CA8837-A1) to the University of Michigan. Supported in part with federal funds from the National Cancer Institute, National Institutes of Health, Department of Health and Human Services, under Contracts No. N01-PC-35139 and N01-PC-65064. The collection of cancer incidence data used in this publication was supported by the California Department of Health Services as part of the statewide cancer reporting program mandated by California Health and Safety Code Section 103885.
The ideas and opinions expressed herein are those of the authors, and no endorsement by the State of California, Department of Health Services is intended or should be inferred.
Authors' disclosures of potential conflicts of interest and author contributions are found at the end of this article.
We thank the American College of Surgeons Cancer Department (Connie Bura and David Winchester, MD) for their support for the surgeon study.
1. Birkmeyer JD: Understanding surgeon performance and improving patient outcomes. J Clin Oncol 22:2765-2766, 2004
2. Birkmeyer JD, Siewers AE, Finlayson EVA, et al: Hospital volume and surgical mortality in the United States. N Engl J Med 346:1128-1137, 2002
3. Birkmeyer JD, Stukel TA, Siewers AE, et al: Surgeon volume and operative mortality in the United States. N Engl J Med 349:2117-2127, 2003
4. Porter GA, Soskolne CL, Yakimets WW, et al: Surgeon-related factors and outcome in rectal cancer. Ann Surg 227:157-167, 1998
5. Hillner BE, Smith RJ, Desch CE: Hospital and physician volume or specialization and outcomes in cancer treatment: Importance in quality of cancer care. J Clin Oncol 18:2327-2340, 2000
6. Begg CB, Riedel ER, Bach PB, et al: Variations in morbidity after radical prostatectomy. N Engl J Med 346:1138-1144, 2002
7. Birkmeyer NJO, Goodney PP, Stukel TA, et al: Do cancer centers designated by the National Cancer Institute have better surgical outcomes? Cancer 103:435-441, 2005
8. Halm EA, Lee C, Chassin MR: Is volume related to outcome in health care? A systematic review and methodologic critique of the literature. Ann Intern Med 137:511-520, 2002
9. Katz SJ, Lantz PM, Janz NK, et al: Patterns and correlates of local therapy for women with ductal carcinoma in situ. J Clin Oncol 23:3001-3007, 2005
10. Katz SJ, Lantz PM, Janz NK, et al: The role of patient involvement in surgical treatment decisions for breast cancer. J Clin Oncol 23:5526-5533, 2005
11. Nattinger AB, Gottlieb MS, Veum J, et al: Geographic variation in the use of breast-conserving treatment for breast cancer. N Engl J Med 326:1102-1107, 1992
12. Bland KI, Scott-Conner CE, Menck H, et al: Axillary dissection in breast-conserving surgery for stage I and II breast cancer: A National Cancer Data Base study of patterns of omission and implications for survival. J Am Coll Surg 188:586-589, 1999
13. Foster RSJ, Farwell ME, Costanza MC: Breast-conserving surgery for breast cancer: Patterns of care in a geographic region and estimation of potential applicability. Ann Surg Oncol 2:275-280, 1995
14. Johantgen ME, Coffey RM, Harris DR, et al: Treating early-stage breast cancer: Hospital characteristics associated with breast-conserving surgery. Am J Public Health 85:1432-1434, 1995
15. Katz SJ, Lantz PM, Janz NK, et al: Surgeon perspectives on local therapy for breast cancer. Cancer 104:1854-1861, 2005
16. American College of Surgeons: What is an approved cancer program? Commission on Cancer
17. Rogers WH: Regression standard errors in clustered samples. Stata Technical Bull 13:19-23, 1993
18. Wooldridge JM: Econometric Analysis of Cross Section and Panel Data. Cambridge, MA, MIT Press, 2002
19. Hawley ST, Hofer TP, Janz NK, et al: Correlates of between-surgeon variation in breast cancer treatments. Med Care 44:609-616, 2006
20. Hewitt M, Simone JV (eds): Ensuring Quality Cancer Care. National Cancer Policy Board, Institute of Medicine and National Research Council. Washington, DC, National Academy Press, 1999