Revisions to Method 301: Field Validation of Pollutant Measurement Methods From Various Waste Media

Federal Register Volume 83, Number 54 (Tuesday, March 20, 2018)

Rules and Regulations

Pages 12118-12133

From the Federal Register Online via the Government Publishing Office www.gpo.gov

FR Doc No: 2018-05400

=======================================================================

-----------------------------------------------------------------------

ENVIRONMENTAL PROTECTION AGENCY

40 CFR Part 63

EPA-HQ-OAR-2016-0069; FRL-9975-62-OAR

RIN 2060-AT17

Revisions to Method 301: Field Validation of Pollutant Measurement Methods From Various Waste Media

AGENCY: Environmental Protection Agency (EPA).

ACTION: Final rule.

-----------------------------------------------------------------------

SUMMARY: The Environmental Protection Agency (EPA) is publishing editorial and technical revisions to the EPA's Method 301 ``Field Validation of Pollutant Measurement Methods from Various Waste Media'' to correct and update the method. In addition, the EPA is clarifying the regulatory applicability of Method 301 as well as its suitability for use with other regulations. The revisions include ruggedness testing for validation of test methods intended for application at multiple sources, determination of the limit of detection for all method validations, incorporating procedures for determining the limit of detection, revising the sampling requirements for the method comparison procedure, adding storage and sampling procedures for sorbent sampling systems, and clarifying acceptable statistical results for candidate test methods. We are also clarifying the applicability of Method 301 to our regulations and adding equations to clarify calculation of the correction factor, standard deviation, estimated variance of a validated test method, standard deviation of differences, and t-statistic for all validation approaches. We have also made minor changes in response to public comments. Changes made to the Method 301 field validation protocol under this action apply only to methods submitted to the EPA for approval after the effective date of this final rule.

DATES: The final rule is effective on March 20, 2018.

ADDRESSES: We have established a docket for this rulemaking under Docket ID Number EPA-HQ-OAR-2016-0069. All documents in the docket are listed on the https://www.regulations.gov website. Although listed in the index, some information is not publicly available, e.g., Confidential Business Information (CBI) or other information whose disclosure is restricted by statute. Certain other material, such as copyrighted material, is not placed on the internet and will be publicly available only in hard copy form. Publicly available docket materials are available electronically through https://www.regulations.gov.

FOR FURTHER INFORMATION CONTACT: Ms. Robin Segall, Office of Air Quality Planning and Standards, Air Quality Assessment Division (E143-02), Environmental Protection Agency, Research Triangle Park, NC 27711; telephone number: (919) 541-0893; fax number: (919) 541-0516; email address: email protected.

SUPPLEMENTARY INFORMATION: The information in this preamble is organized as follows:

Table of Contents

I. General Information

  A. Does this action apply to me?

  B. Where can I get a copy of this document and other related information?

  C. Judicial Review and Administrative Reconsideration

II. Background

III. Summary of Final Amendments

  A. Technical Revisions

  B. Clarifying and Editorial Changes

IV. Response to Comment

V. Statutory and Executive Order Reviews

  A. Executive Order 12866: Regulatory Planning and Review and Executive Order 13563: Improving Regulation and Regulatory Review

  B. Executive Order 13771: Reducing Regulations and Controlling Regulatory Costs

  C. Paperwork Reduction Act (PRA)

  D. Regulatory Flexibility Act (RFA)

  E. Unfunded Mandates Reform Act (UMRA)

  F. Executive Order 13132: Federalism

  G. Executive Order 13175: Consultation and Coordination With Indian Tribal Governments

  H. Executive Order 13045: Protection of Children From Environmental Health Risks and Safety Risks

  I. Executive Order 13211: Actions That Significantly Affect Energy Supply, Distribution, or Use

  J. National Technology Transfer and Advancement Act (NTTAA) and 1 CFR Part 51

  K. Executive Order 12898: Federal Actions To Address Environmental Justice in Minority Populations and Low-Income Populations

  L. Congressional Review Act (CRA)

    I. General Information

  A. Does this action apply to me?

    Method 301 applies to you, under 40 CFR 63.7(f) or 40 CFR 65.158(a)(2)(iii), when you want to use an alternative to a required test method to meet an applicable requirement or when there is no required or validated test method. In addition, the validation procedures of Method 301 may be used as a tool for demonstration of the suitability of alternative test methods under 40 CFR 59.104 and 59.406, 40 CFR 60.8(b), and 40 CFR 61.13(h)(1)(ii). If you have questions regarding the applicability of the changes to Method 301, contact the person listed in the preceding FOR FURTHER INFORMATION CONTACT section.

  B. Where can I get a copy of this document and other related information?

    In addition to being available in the docket, an electronic copy of the method revisions is available on the Air Emission Measurement Center (EMC) website at https://www.epa.gov/emc/. The EMC provides information regarding stationary source air emissions test methods and procedures.

  C. Judicial Review and Administrative Reconsideration

    Under Clean Air Act (CAA) section 307(b)(1), judicial review of this final action is available only by filing a petition for review in the United States Court of Appeals for the District of Columbia Circuit by May 21, 2018. Under CAA section 307(b)(2), the requirements established by these final rules may not be challenged separately in any civil or criminal proceedings brought by the EPA to enforce the requirements.

    Section 307(d)(7)(B) of the CAA provides that ``only an objection to a rule or procedure which was raised with reasonable specificity during the period for public comment (including any public hearing) may be raised during judicial review.'' This section also provides a mechanism for the EPA to reconsider the rule ``if the person raising an objection can demonstrate to the Administrator that it was impracticable to raise such objection within the period for public comment or if the grounds for such objection arose after the period for public comment (but within the time specified for judicial review) and if such objection is of central relevance to the outcome of the rule.'' Any person seeking to make such a demonstration should submit a Petition for Reconsideration to the Office of the Administrator, U.S. EPA, Room 3000, WJC Building, 1200 Pennsylvania Ave. NW, Washington, DC 20460, with a copy to both the person listed in the preceding FOR FURTHER INFORMATION CONTACT section, and the Associate General Counsel for the Air and Radiation Law Office, Office of General Counsel (Mail Code 2344A), U.S. EPA, 1200 Pennsylvania Ave. NW, Washington, DC 20460.

    II. Background

    The EPA proposed revisions to Method 301 on December 2, 2016 (81 FR 87003). The EPA received one comment letter on the proposed revisions to EPA Method 301, which is addressed in Section IV of this preamble.

    The EPA originally published Method 301 (appendix A to 40 CFR part 63, Test Methods) on December 29, 1992 (57 FR 61970), as a field validation protocol method to be used to validate new test methods for hazardous air pollutants (HAP) in support of the Early Reductions Program of part 63 when existing test methods were inapplicable. On March 16, 1994, the EPA incorporated Method 301 into 40 CFR 63.7 (59 FR 12430) to provide procedures for validating a candidate test method as an alternative to a test method specified in a standard or for use where no test method is provided in a standard.

    Method 301 specifies procedures for determining and documenting the bias and precision of a test method that is a candidate for use as an alternative to a test method specified in an applicable regulation. Method 301 has also been required for validating test methods to be used in demonstrating compliance with a regulatory standard in the absence of a validated test method. Method 301 is required for these purposes under 40 CFR 63.7(f) and 40 CFR 65.158(a)(2)(iii), and is an appropriate tool for demonstration and validation of alternative methods under 40 CFR 59.104 and 59.406, 40 CFR 60.8(b), and 40 CFR 61.13(h)(1)(ii). The procedures specified in Method 301 are applicable to various media types (e.g., sludge, exhaust gas, wastewater).

    Bias (or systematic error) is established by comparing measurements made using a candidate test method against reference values, either reference materials or a validated test method. Where needed, a correction factor for source-specific application of the method is employed to eliminate or minimize bias. This correction factor is established from data obtained during the validation test. Methods that have bias correction factors outside a specified range are considered unacceptable. For precision (or random error), the candidate method must be demonstrated to be at least as precise as the validated method, or to have a relative standard deviation of no more than 20 percent when the candidate method is evaluated using reference materials.

    Neither the Method as originally established on December 29, 1992, nor the subsequent revision on May 18, 2011 (76 FR 28664), has distinguished requirements for single-source applications of a candidate method from those that apply at multiple sources. The revisions promulgated in this action recognize that requirements related to bias and ruggedness testing should differ between single-source and multiple-source application of an alternative method. Additionally, through our reviews of submitted Method 301 data packages and responses to questions from industry, technology vendors, and testing organizations seeking to implement the method, we recognized that there was confusion about the specific testing requirements and the statistical calculations associated with each of the three ``Sampling Procedures.'' To improve the readability and application of Method 301, we proposed and are finalizing minor edits throughout the method text to clarify the descriptions and requirements for assessing bias and precision for each ``Sampling Procedure'' and have added equations to ensure that required calculations and acceptance criteria for each of the three sampling approaches are clear.

    III. Summary of Final Amendments

    In this section, we discuss the final amendments to Method 301, the changes since proposal, and the rationale for the changes. We are finalizing clarifications to the regulatory applicability of Method 301 and its suitability for use with other regulations, as well as finalizing technical revisions and editorial changes intended to clarify and update the requirements and procedures specified in Method 301.

  A. Technical Revisions

    1. Applicability of Ruggedness Testing and Limit of Detection Determination

      In this action, we are amending sections 3.1 and 14.0 to require ruggedness testing when using Method 301 to validate a candidate test method intended for application to multiple sources. Ruggedness testing is optional for validation of methods intended for single-source applications. We are also amending sections 3.1 and 15.0 to require determination of the limit of detection (LOD) for validation of all methods (i.e., those intended for both single-source and multi-source application). Additionally, we are clarifying the LOD definition in section 15.1.

      Ruggedness testing of a test method is a laboratory study that determines the sensitivity of the method by measuring its capacity to remain unaffected by small but deliberate variations in method parameters (such as sample collection rate and sample recovery temperature), thereby providing an indication of its reliability during normal usage. Requiring ruggedness testing and determination of the LOD for validation of a candidate test method that is intended for use at multiple sources will further inform the EPA's determination of whether the candidate test method is valid across a range of source emission matrices, varying method parameters, and conditions. Additionally, conducting an LOD determination for both single- and multi-source validations will account for the sensitivity of the candidate test method to ensure it meets applicable regulatory requirements.

    2. Limit of Detection Procedures

      In this action, the EPA is finalizing revisions to the requirements for determining the LOD specified in section 15.2 and Table 301-5 (Procedure I) of Method 301 to reference the procedures for determining the method detection limit (MDL) in 40 CFR part 136, appendix B, as revised on August 28, 2017 (82 FR 40836), which addresses laboratory blank contamination and accounts for intra-laboratory variability. Procedure I of Table 301-5 of Method 301 is used for determining an LOD when an analyte in a sample matrix is collected prior to an analytical measurement or the estimated LOD is no more than twice the calculated LOD. For the purposes of Method 301, LOD will now be equivalent to the calculated MDL determined using the procedures specified in 40 CFR part 136, appendix B.

      When EPA proposed revisions to Method 301 (81 FR 87003; December 2, 2016), we noted in the preamble that the Method 301 revisions were referencing proposed revisions to the MDL calculation procedures of 40 CFR part 136, appendix B. At that time, we stated, ``If the revisions to 40 CFR part 136, appendix B are finalized as proposed prior to a final action on this Method 301 proposal, we will cross-reference appendix B. If appendix B is finalized before this action and the revisions do not incorporate the procedures as described above, the EPA intends to incorporate the specific procedures for determining the LOD in the final version of Method 301 consistent with this proposal.'' The appendix B provisions of 40 CFR part 136 were recently finalized with the Clean Water Act Methods Update Rule on August 28, 2017 (82 FR 40836). As a result of comments on the proposed Methods Update rule, there were minor clarifications, but ``no significant revisions were made to the proposed MDL procedure'' of appendix B as stated in Section III.I of the preamble to that rule. Because the Methods Update rule containing the MDL procedure was finalized with no significant changes, and we have determined that the final requirements of appendix B are appropriate for the CAA programs at issue, we are cross-referencing the finalized MDL determination calculation procedure of 40 CFR part 136, appendix B, in section 15.2 and Table 301-5 of Method 301.
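
      For illustration, the appendix B procedure derives the MDL from replicate low-level spiked samples. A minimal Python sketch of that core calculation, using hypothetical values and omitting the method-blank and multi-batch provisions of the full procedure, might look like the following (the rule text remains authoritative):

```python
# Illustrative sketch only: a simplified spiked-sample method detection
# limit (MDL) calculation in the spirit of 40 CFR part 136, appendix B.
# The full procedure also addresses method blanks and multi-instrument,
# multi-batch data sets.
import numpy as np
from scipy import stats

def mdl_from_spiked_replicates(results):
    """MDL = t(n-1, 0.99) * s, where s is the standard deviation of
    replicate low-level spiked-sample results (hypothetical data)."""
    results = np.asarray(results, dtype=float)
    n = results.size
    t_99 = stats.t.ppf(0.99, df=n - 1)   # one-sided 99th percentile of Student's t
    s = results.std(ddof=1)              # sample standard deviation of the replicates
    return t_99 * s

# Seven hypothetical spiked-replicate results (e.g., ug/dscm)
print(mdl_from_spiked_replicates([0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49]))
```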

    3. Storage and Sampling Procedures

      In this action, we are finalizing the proposed revisions to sections 9.0 and 11.1.3 and Table 301-1 of Method 301 to require, at a minimum, six sets of quadruplicate samples (a total of 24 samples) for comparison of a candidate method against a validated method rather than four sets of quadruplicate samples or nine sets of paired samples, as currently required. These revisions ensure that the bias and precision requirements are consistent between the various sampling approaches in the method and decrease the amount of uncertainty in the calculations for bias and precision when comparing an alternative or candidate test method with a validated method. Bias and precision (standard deviation and variance) are inversely related to the number of sampling trains (sample results) used to estimate the difference between the alternative test method and the validated method. As the number of trains increases, the uncertainty in the bias and precision estimates decreases. Larger data sets provide better estimates of the standard deviation or variance and the distribution of the data. The revision to collect a total of 24 samples when using the comparison against a validated method approach is also consistent with the number of samples required for both the analyte spiking and the isotopic spiking approaches. The 12 samples collected when conducting the isotopic spiking approach are equivalent to the 24 samples collected using the analyte spiking approach because the isotopic labeling of the spike allows each of the 12 samples to yield two results (one result for an unspiked sample, and one result for a spiked sample).

      For validations conducted by comparing the candidate test method to a validated test method, we are also finalizing the following additions: (1) Storage and sampling procedures for sorbent systems requiring thermal desorption to Table 301-2 of Method 301, and (2) a new Table 301-4 of Method 301 to provide a look-up table of F values for the one-sided confidence level used in assessing the precision of the candidate test method. We also are amending the reference list in section 18.0 to include the source of the F values in Table 301-4.

    4. Bias Criteria for Multi-Source Versus Single-Source Validation

      In this action, we are finalizing revisions that clarify sections 8.0, 10.3, and 11.1.3 of Method 301 to specify that candidate test methods intended for use at multiple sources must have a bias less than or equal to 10 percent. Candidate test methods with a bias greater than 10 percent but less than or equal to 30 percent are applicable only at the source at which the validation testing was conducted, and data collected in the future must be adjusted for bias using a source-specific correction factor. A single-source correction factor is not appropriate for use at multiple sources. This change provides flexibility for source-specific Method 301 application while limiting the acceptance criteria for use of the method at multiple sources.

    5. Relative Standard Deviation Assessment

      In sections 9.0 and 12.2 of Method 301, we are finalizing language regarding the interpretation of the relative standard deviation (RSD) when determining the precision of a candidate test method using the analyte spiking or isotopic spiking procedures. For a test method to be acceptable, we proposed that the RSD of a candidate test method must be less than or equal to 20 percent. Accordingly, we are removing the sampling provisions for cases where the RSD is greater than 20 percent, but less than 50 percent. Poor precision makes it difficult to detect potential bias in a test method. For this reason, we proposed and are now finalizing an acceptance criterion of less than or equal to 20 percent for analyte and isotopic spiking sampling procedures.

    6. Applicability of Method 301

      Although 40 CFR 65.158(a)(2)(iii) specifically cross-references Method 301, Method 301 formerly did not reference part 65. For parts 63 and 65, Method 301 must be used for establishing an alternative test method. Thus, in this action, we are finalizing language that clarifies that Method 301 is applicable to both parts 63 and 65 and that Method 301 may be used for validating alternative test methods under the following parts of title 40 of the Code of Federal Regulations:

      Part 59 (National Volatile Organic Compound Emission Standards for Consumer and Commercial Products).

      Part 60 (Standards of Performance for New Stationary Sources).

      Part 61 (National Emission Standards for Hazardous Air Pollutants).

      We believe that the Method 301 procedures for determining bias and precision provide a suitable technical approach for assessing candidate or alternative test methods for use under these regulatory parts because the testing provisions are very similar to those under parts 63 and 65. To accommodate the expanded applicability and suitability, we are revising the references in sections 2.0, 3.2, 5.0, 13.0, 14.0, and 16.1 of Method 301 to refer to all five regulatory parts.

    7. Equation Additions

      In this action, we are clarifying the procedures in Method 301 by adding the following equations:

      Equation 301-8 in section 10.3 for calculating the correction factor.

      Equation 301-11 in section 11.1.1 and Equation 301-19 in section 12.1.1 for calculating the numerical bias.

      Equation 301-12 in section 11.1.2 and Equation 301-20 in section 12.1.2 for determining the standard deviation of differences.

      Equation 301-13 in section 11.1.3 and Equation 301-21 in section 12.1.3 for calculating the t-statistic.

      Equation 301-15 in section 11.2.1 to estimate the variance of the validated test method.

      Equation 301-23 in section 12.2 for calculating the standard deviation.

      We also are revising the denominator of Equation 301-22 to use the variable ``CS'' rather than ``VS.'' Additionally, we are revising the text of Method 301, where needed, to list and define all variables used in the method equations. These changes are intended to improve the readability of the method and ensure that required calculations and acceptance criteria for each of the three validation approaches in Method 301 are clear.

  B. Clarifying and Editorial Changes

    In this action, we are applying minor edits throughout the text of Method 301 to clarify the descriptions and requirements for assessing bias and precision, to ensure consistency when referring to citations within the method, to renumber equations and tables (where necessary), and to remove passive voice.

    In addition, we are clarifying several definitions in section 3.2. We are modifying the definition of ``Paired sampling system'' to provide that a paired sampling system is collocated with respect to sampling time and location. For the definition of ``Quadruplet sampling system,'' we are replacing the term ``Quadruplet'' with ``Quadruplicate'' and adding descriptive text to provide examples of replicate samples. We are also making companion edits throughout the method text to reflect the change in terminology from ``quadruplet'' to ``quadruplicate.'' Additionally, we are revising the definition of ``surrogate compound'' to clarify that a surrogate compound must be distinguishable from other compounds being measured by the candidate method.

    We are also replacing the term ``alternative test method'' with ``candidate test method'' in section 3.2 and throughout Method 301 to maintain consistency when referring to a test method that is subject to the validation procedures specified in Method 301.

    Additionally, the EPA is making the following updates and corrections:

    Updating the address for submitting waivers in section 17.2.

    Correcting the t-value for four degrees of freedom in Table 301-3 ``Critical Values of t'' as well as expanding the table to include t-values up to 20 degrees of freedom. We originally proposed expanding the table to only 11 degrees of freedom, but recognized that users may occasionally want to use significantly more than the minimum number of test runs and samples.

    Including a Table 301-4 ``Upper Critical Values of the F Distribution'' and an associated reference in section 18.0 to provide method users with convenient access to the F values needed to perform the required statistical calculations in Method 301. For the same reason that we originally included the Table 301-3 ``Critical Values of t'' in the 2011 revisions to Method 301, we recognized in finalizing the proposed revisions that we should additionally include a table for the F distribution.
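
    For users who prefer to compute these values rather than read them from the tables, the sketch below reproduces representative critical t and F values with a general-purpose statistics library (SciPy is assumed here for illustration; it is not required by Method 301, and Tables 301-3 and 301-4 remain authoritative):

```python
# Illustrative sketch only: computing critical values comparable to those
# tabulated in Method 301 -- Table 301-3 (two-sided t, 95 percent
# confidence) and Table 301-4 (upper one-sided F, 95 percent confidence).
from scipy import stats

# Two-sided 95 percent critical t values for 1 through 20 degrees of freedom
t_crit = {df: stats.t.ppf(0.975, df) for df in range(1, 21)}
print(round(t_crit[11], 3))   # 2.201, the value cited for isotopic spiking

# Upper 95th-percentile F value for, e.g., 11 numerator and 11 denominator
# degrees of freedom, as used in the Section 11.2.2 precision F test
print(round(stats.f.ppf(0.95, dfn=11, dfd=11), 2))
```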

    IV. Response to Comment

    We received one public comment letter submitted on behalf of the Utility Air Regulatory Group presenting two comments.

    Comment: The commenter notes that section 6.4.1 of Method 301 requires that the probe tips for each of the paired sampling probes be 2.5 centimeters away from each other with a pitot tube on the outside of each probe and claims that the collocation criteria of Method 301 are infeasible for many currently accepted test methods, including Method 30B. The commenter states that if the outside diameter of the validated test method probe is 3 inches (as is common for Method 30B probes), it is impossible for a second probe of equal diameter to meet the probe tip location requirement even if the two probes are immediately adjacent. In addition, the commenter claims that if the sample port being used to perform the validation testing has an inside diameter of 4 inches, a common port size, then two paired sampling probes with an outside diameter of 3 inches cannot physically fit into the sample port, making collocation impossible. The commenter notes that sections 6.4.1 and 17.1 provide for some latitude for waivers of the probe placement requirements, but believes the waiver language is inadequate and recommends that EPA provide alternative probe placements that are practically achievable.

    Response: We recommend that organizations conducting validation testing seek to use 6-inch ports, which are fairly common. Should 6-inch ports not be available at a source where validation testing must be conducted, then they should be installed if practicable. However, we recognize that there still may be instances where the sampling probe requirements are not feasible in a specific situation. Current Method 301 addresses this situation by providing in section 6.4.1 for Administrator approval of a validation request with other paired arrangements for the pitot tube. While we do not agree with the commenter that EPA should provide alternative probe tip and pitot tube placement options within Method 301, we do appreciate that the Administrator approval language provided in the method could provide additional flexibility with regard to both pitot tube and probe tip placement, and we have revised the language of section 6.4.1 and relocated it to section 6.4 to clarify that it is applicable to all aspects of sampling probe/pitot placement.

    Comment: The commenter points out that section 8.0 of Method 301 specifies that the bias of a candidate method as compared to a reference method be no more than 10 percent. The commenter contends this criterion is inadequate and unachievable at low concentrations, which now occur more frequently, and recommends that the Method 301 bias criterion be modified to include an alternative performance criterion based on an absolute difference rather than a percent of the measurement to address field validation measurements made at low levels.

    Response: The EPA disagrees with the commenter that the Method 301 bias criterion should be modified to include an alternative performance criterion based on an absolute difference rather than a percent of the measurement. It is important to understand that the 10 percent bias criterion applies only to candidate methods that will be applied to multiple sources. A candidate method to be applied to a single source is allowed a bias up to 30 percent when coupled with a source-specific bias correction factor if the bias exceeds 10 percent. Though we recognize that emission levels are decreasing, when a candidate method is being validated for broad applicability to multiple sources, there is the opportunity to optimize field validation by conducting testing at sources with relatively higher emissions. As Method 301 is designed for validation of methods for many pollutants emitted from a large range of source categories under many different rules, EPA believes it would, at best, be extremely difficult to specify generic alternative criteria for validation at low levels. Such issues are part of the rationale for the flexibility under section 17.0 of Method 301; with this language EPA maintains the ability to waive some or all of the procedures of Method 301 if it can be demonstrated to the Administrator's satisfaction that the bias and precision of a candidate method are suitable for the stated application. To clarify that these provisions apply to all required facets of Method 301, we have revised section 17.2 to include the LOD determination along with bias and precision.

    V. Statutory and Executive Order Reviews

  A. Executive Order 12866: Regulatory Planning and Review and Executive Order 13563: Improving Regulation and Regulatory Review

    This action is not a significant regulatory action and was, therefore, not submitted to the Office of Management and Budget (OMB) for review.

  B. Executive Order 13771: Reducing Regulations and Controlling Regulatory Costs

    This action is not an Executive Order 13771 regulatory action because this action is not significant under Executive Order 12866.

  C. Paperwork Reduction Act (PRA)

    This action does not impose an information collection burden under the PRA. The revisions in this action to Method 301 do not add information collection requirements, but make corrections and updates to existing testing methodology.

  D. Regulatory Flexibility Act (RFA)

    I certify that this action will not have a significant economic impact on a substantial number of small entities under the RFA. This action will not impose any requirements on small entities. In making this determination, the impact of concern is any significant adverse economic impact on small entities. An agency may certify that a rule will not have a significant economic impact on a substantial number of small entities if the rule relieves regulatory burden, has no net burden or otherwise has a positive economic effect on the small entities subject to the rule. The revisions to Method 301 do not impose any requirements on regulated entities beyond those specified in the current regulations and they do not change any emission standard. We have therefore concluded that this action will have no net regulatory burden for all directly regulated small entities.

  E. Unfunded Mandates Reform Act (UMRA)

    This action does not contain any unfunded mandate of $100 million or more as described in UMRA, 2 U.S.C. 1531-1538. The action imposes no enforceable duty on any state, local, or tribal governments or the private sector.

  F. Executive Order 13132: Federalism

    This action does not have federalism implications. It will not have substantial direct effects on the states, on the relationship between the national government and the states, or on the distribution of power and responsibilities among the various levels of government.

  G. Executive Order 13175: Consultation and Coordination With Indian Tribal Governments

    This action does not have tribal implications, as specified in Executive Order 13175. This action corrects and updates the existing procedures specified in Method 301. Thus, Executive Order 13175 does not apply to this action.

  H. Executive Order 13045: Protection of Children From Environmental Health Risks and Safety Risks

    The EPA interprets Executive Order 13045 as applying only to those regulatory actions that concern environmental health or safety risks that the EPA has reason to believe may disproportionately affect children, per the definition of ``covered regulatory action'' in section 2-202 of the Executive Order. This action is not subject to Executive Order 13045 because it does not concern an environmental health risk or safety risk.

    I. Executive Order 13211: Actions That Significantly Affect Energy Supply, Distribution, or Use

    This action is not subject to Executive Order 13211, because it is not a significant regulatory action under Executive Order 12866.

  J. National Technology Transfer and Advancement Act (NTTAA) and 1 CFR Part 51

    This action involves technical standards. The agency previously identified ASTM D4855-97 (Standard Practice for Comparing Test Methods) as being potentially applicable in previous revisions of Method 301, but determined that the use of ASTM D4855-97 was impractical (section V in 76 FR 28664, May 18, 2011).

  K. Executive Order 12898: Federal Actions To Address Environmental Justice in Minority Populations and Low-Income Populations

    The EPA believes that this action is not subject to Executive Order 12898 (59 FR 7629, February 16, 1994) because it does not establish an environmental health or safety standard. This action makes corrections and updates to an existing protocol for assessing the precision and accuracy of alternative test methods to ensure they are comparable to the methods otherwise required; thus, it does not modify or affect the impacts to human health or the environment of any standards for which it may be used.

    L. Congressional Review Act (CRA)

    This action is subject to the CRA, and the EPA will submit a rule report to each House of the Congress and to the Comptroller General of the United States. This action is not a ``major rule'' as defined by 5 U.S.C. 804(2).

    List of Subjects in 40 CFR Part 63

    Environmental protection, Air pollution control, Alternative test method, EPA Method 301, Field validation, Hazardous air pollutants.

    Dated: March 8, 2018.

  Scott Pruitt,

    Administrator.

    For the reasons stated in the preamble, the EPA amends title 40, chapter I of the Code of Federal Regulations as follows:

    PART 63--AMENDED

    0

    1. The authority citation for part 63 continues to read as follows:

      Authority: 42 U.S.C. 7401 et seq.

      0

    2. Appendix A to part 63 is amended by revising Method 301 to read as follows:

      Appendix A to Part 63--Test Methods

      Method 301--Field Validation of Pollutant Measurement Methods From Various Waste Media

      Sec.

      Using Method 301

      1.0 What is the purpose of Method 301?

      2.0 What approval must I have to use Method 301?

      3.0 What does Method 301 include?

      4.0 How do I perform Method 301?

      Reference Materials

      5.0 What reference materials must I use?

      Sampling Procedures

      6.0 What sampling procedures must I use?

      7.0 How do I ensure sample stability?

      Determination of Bias and Precision

      8.0 What are the requirements for bias?

      9.0 What are the requirements for precision?

      10.0 What calculations must I perform for isotopic spiking?

      11.0 What calculations must I perform for comparison with a validated method?

      12.0 What calculations must I perform for analyte spiking?

      13.0 How do I conduct tests at similar sources?

      Optional Requirements

      14.0 How do I use and conduct ruggedness testing?

      15.0 How do I determine the Limit of Detection for the candidate test method?

      Other Requirements and Information

      16.0 How do I apply for approval to use a candidate test method?

      17.0 How do I request a waiver?

      18.0 Where can I find additional information?

      19.0 Tables.

      Using Method 301

      1.0 What is the purpose of Method 301?

      Method 301 provides a set of procedures for the owner or operator of an affected source to validate a candidate test method as an alternative to a required test method based on established precision and bias criteria. These validation procedures are applicable under 40 CFR part 63 or 65 when a test method is proposed as an alternative test method to meet an applicable requirement or in the absence of a validated method. Additionally, the validation procedures of Method 301 are appropriate for demonstration of the suitability of alternative test methods under 40 CFR parts 59, 60, and 61. If, under 40 CFR part 63 or 65, you choose to propose a validation method other than Method 301, you must submit and obtain the Administrator's approval for the candidate validation method.

      2.0 What approval must I have to use Method 301?

      If you want to use a candidate test method to meet requirements in a subpart of 40 CFR part 59, 60, 61, 63, or 65, you must also request approval to use the candidate test method according to the procedures in Section 16 of this method and the appropriate section of the part (Sec. 59.104, Sec. 59.406, Sec. 60.8(b), Sec. 61.13(h)(1)(ii), Sec. 63.7(f), or Sec. 65.158(a)(2)(iii)). You must receive the Administrator's written approval to use the candidate test method before you use the candidate test method to meet the applicable federal requirements. In some cases, the Administrator may decide to waive the requirement to use Method 301 for a candidate test method to be used to meet a requirement under 40 CFR part 59, 60, 61, 63, or 65 in absence of a validated test method. Section 17 of this method describes the requirements for obtaining a waiver.

      3.0 What does Method 301 include?

      3.1 Procedures. Method 301 includes minimum procedures to determine and document systematic error (bias) and random error (precision) of measured concentrations from exhaust gases, wastewater, sludge, and other media. Bias is established by comparing the results of sampling and analysis against a reference value. Bias may be adjusted on a source-specific basis using a correction factor and data obtained during the validation test. Precision may be determined using a paired sampling system or quadruplicate sampling system for isotopic spiking. A quadruplicate sampling system is required when establishing precision for analyte spiking or when comparing a candidate test method to a validated method. If such procedures have not been established and verified for the candidate test method, Method 301 contains procedures for ensuring sample stability by developing sample storage procedures and limitations and then testing them. Method 301 also includes procedures for ruggedness testing and determining detection limits. The procedures for ruggedness testing and determining detection limits are required for candidate test methods that are to be applied to multiple sources and optional for candidate test methods that are to be applied at a single source.

      3.2 Definitions.

      Affected source means an affected source as defined in the relevant part and subpart under Title 40 (e.g., 40 CFR parts 59, 60, 61, 63, and 65).

      Candidate test method means the sampling and analytical methodology selected for field validation using the procedures described in Method 301. The candidate test method may be an alternative test method under 40 CFR part 59, 60, 61, 63, or 65.

      Paired sampling system means a sampling system capable of obtaining two replicate samples that are collected as closely as possible in sampling time and sampling location (collocated).

      Quadruplicate sampling system means a sampling system capable of obtaining four replicate samples (e.g., two pairs of measured data, one pair from each method when comparing a candidate test method against a validated test method, or analyte spiking with two spiked and two unspiked samples) that are collected as close as possible in sampling time and sampling location.

      Surrogate compound means a compound that serves as a model for the target compound(s) being measured (i.e., similar chemical structure, properties, behavior). The surrogate compound can be distinguished by the candidate test method from the compounds being analyzed.

      4.0 How do I perform Method 301?

      First, you use a known concentration of an analyte or compare the candidate test method against a validated test method to determine the bias of the candidate test method. Then, you collect multiple, collocated simultaneous samples to determine the precision of the candidate test method. Additional procedures, including validation testing over a broad range of concentrations and over an extended time period, are used to expand the applicability of a candidate test method to multiple sources. Sections 5.0 through 17.0 of this method describe the procedures in detail.

      Reference Materials

      5.0 What reference materials must I use?

      You must use reference materials (a material or substance with one or more properties that are sufficiently homogenous to the analyte) that are traceable to a national standards body (e.g., National Institute of Standards and Technology (NIST)) at the level of the applicable emission limitation or standard that the subpart in 40 CFR part 59, 60, 61, 63, or 65 requires. If you want to expand the applicable range of the candidate test method, you must conduct additional test runs using analyte concentrations higher and lower than the applicable emission limitation or the anticipated level of the target analyte. You must obtain information about your analyte according to the procedures in Sections 5.1 through 5.4 of this method.

      5.1 Exhaust Gas Test Concentration. You must obtain a known concentration of each analyte from an independent source such as a specialty gas manufacturer, specialty chemical company, or chemical laboratory. You must also obtain the manufacturer's certification of traceability, uncertainty, and stability for the analyte concentration.

      5.2 Tests for Other Waste Media. You must obtain the pure liquid components of each analyte from an independent manufacturer. The manufacturer must certify the purity, traceability, uncertainty, and shelf life of the pure liquid components. You must dilute the pure liquid components in the same type medium or matrix as the waste from the affected source.

      5.3 Surrogate Analytes. If you demonstrate to the Administrator's satisfaction that a surrogate compound behaves as the analyte does, then you may use surrogate compounds for highly toxic or reactive compounds. A surrogate may be an isotope or compound that contains a unique element (e.g., chlorine) that is not present in the source, or a derivative of the toxic or reactive compound if the derivative formation is part of the method's procedure. You may use laboratory experiments or literature data to show behavioral acceptability.

      5.4 Isotopically-Labeled Materials. Isotope mixtures may contain the isotope and the natural analyte. The concentration of the isotopically-labeled analyte must be more than five times the concentration of the naturally-occurring analyte.

      Sampling Procedures

      6.0 What sampling procedures must I use?

      You must determine bias and precision by comparison against a validated test method, by using isotopic spiking, or by using analyte spiking (or the equivalent). Isotopic spiking can only be used with candidate test methods capable of measuring multiple isotopes simultaneously, such as test methods using mass spectrometry or radiological procedures. You must collect samples according to the requirements specified in Table 301-1 of this method. You must perform the sampling according to the procedures in Sections 6.1 through 6.4 of this method.

      6.1 Isotopic Spiking. Spike all 12 samples with isotopically-labeled analyte at an analyte mass or concentration level equivalent to the emission limitation or standard specified in the applicable regulation. If there is no applicable emission limitation or standard, spike the analyte at the expected level of the samples. Follow the applicable spiking procedures in Section 6.3 of this method.

      6.2 Analyte Spiking. In each quadruplicate set, spike half of the samples (two out of the four samples) with the analyte according to the applicable procedure in Section 6.3 of this method. You should spike at an analyte mass or concentration level equivalent to the emission limitation or standard specified in the applicable regulation. If there is no applicable emission limitation or standard, spike the analyte at the expected level of the samples. Follow the applicable spiking procedures in Section 6.3 of this method.

      6.3 Spiking Procedure.

      6.3.1 Gaseous Analyte with Sorbent or Impinger Sampling Train. Sample the analyte being spiked (in the laboratory or preferably in the field) at a mass or concentration that is approximately equivalent to the applicable emission limitation or standard (or the expected sample concentration or mass where there is no standard) for the time required by the candidate test method, and then sample the stack gas stream for an equal amount of time. The time for sampling both the analyte and stack gas stream should be equal; however, you must adjust the sampling time to avoid sorbent breakthrough. You may sample the stack gas and the gaseous analyte at the same time. You must introduce the analyte as close to the tip of the sampling probe as possible.

      6.3.2 Gaseous Analyte with Sample Container (Bag or Canister). Spike the sample containers after completion of each test run with an analyte mass or concentration to yield a concentration approximately equivalent to the applicable emission limitation or standard (or the expected sample concentration or mass where there is no standard). Thus, the final concentration of the analyte in the sample container would be approximately equal to the analyte concentration in the stack gas plus the equivalent of the applicable emission standard (corrected for spike volume). The volume of spiked gas must be less than 10 percent of the sample volume of the container.
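
      As a rough illustration of the dilution arithmetic described above, the following sketch, with hypothetical concentrations and volumes, estimates the final analyte concentration in a spiked bag or canister:

```python
# Illustrative sketch only (hypothetical numbers): expected analyte
# concentration in a bag or canister after spiking per Section 6.3.2.
# The spiked result is roughly the stack-gas concentration plus the
# equivalent of the applicable standard, corrected for spike volume.
def spiked_bag_concentration(c_stack, v_sample, c_spike, v_spike):
    """Mass-balance mixing of sample gas and spike gas (consistent units)."""
    if v_spike >= 0.10 * v_sample:
        raise ValueError("Spike volume must be less than 10 percent of the sample volume")
    return (c_stack * v_sample + c_spike * v_spike) / (v_sample + v_spike)

# Example: 10 L bag sample at 5 ppm, spiked with 0.8 L of a 130 ppm standard
print(round(spiked_bag_concentration(5.0, 10.0, 130.0, 0.8), 2))  # ~14.26 ppm
```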

      6.3.3 Liquid or Solid Analyte with Sorbent or Impinger Trains. Spike the sampling trains with an amount approximately equivalent to the mass or concentration in the applicable emission limitation or standard (or the expected sample concentration or mass where there is no standard) before sampling the stack gas. If possible, do the spiking in the field. If it is not possible to do the spiking in the field, you must spike the sampling trains in the laboratory.

      6.3.4 Liquid and Solid Analyte with Sample Container (Bag or Canister). Spike the containers at the completion of each test run with an analyte mass or concentration approximately equivalent to the applicable emission limitation or standard in the subpart (or the expected sample concentration or mass where there is no standard).

      6.4 Probe Placement and Arrangement for Stationary Source Stack or Duct Sampling. To sample a stationary source, you must place the paired or quadruplicate probes according to the procedures in this subsection. You must place the probe tips in the same horizontal plane. Section 17.1 of Method 301 describes conditions for waivers. For example, the Administrator may approve a validation request where other paired arrangements for the probe tips or pitot tubes (where required) are used.

      6.4.1 Paired Sampling Probes. For paired sampling probes, the first probe tip should be 2.5 centimeters (cm) from the outside edge of the second probe tip, with a pitot tube on the outside of each probe.

      6.4.2 Quadruplicate Sampling Probes. For quadruplicate sampling probes, the tips should be in a 6.0 cm x 6.0 cm square area measured from the center line of the opening of the probe tip with a single pitot tube, where required, in the center of the probe tips or two pitot tubes, where required, with their location on either side of the probe tip configuration. Section 17.1 of Method 301 describes conditions for waivers. For example, you must propose an alternative arrangement whenever the cross-sectional area of the probe tip configuration is approximately five percent or more of the stack or duct cross-sectional area.

      7.0 How do I ensure sample stability?

      7.1 Developing Sample Storage and Threshold Procedures. If the candidate test method includes well-established procedures supported by experimental data for sample storage and the time within which the collected samples must be analyzed, you must store the samples according to the procedures in the candidate test method and you are not required to conduct the procedures specified in Section 7.2 or 7.3 of this method. If the candidate test method does not include such procedures, your candidate method must include procedures for storing and analyzing samples to ensure sample stability. At a minimum, your proposed procedures must meet the requirements in Section 7.2 or 7.3 of this method. The minimum duration between sample collection and storage must be as soon as possible, but no longer than 72 hours after collection of the sample. The maximum storage duration must not be longer than 2 weeks.

      7.2 Storage and Sampling Procedures for Stack Test Emissions. You must store and analyze samples of stack test emissions according to Table 301-2 of this method. You may reanalyze the same sample at both the minimum and maximum storage durations for: (1) Samples collected in containers such as bags or canisters that are not subject to dilution or other preparation steps, or (2) impinger samples not subjected to preparation steps that would affect stability of the sample such as extraction or digestion. For candidate test method samples that do not meet either of these criteria, you must analyze one of a pair of replicate samples at the minimum storage duration and the other replicate at the proposed storage duration, but no later than 2 weeks after the initial analysis, to identify the effect of storage duration on analyte samples. If you are using the isotopic spiking procedure, then you must analyze each sample for the spiked analyte and the native analyte.

      7.3 Storage and Sampling Procedures for Testing Other Waste Media (e.g., Soil/Sediment, Solid Waste, Water/Liquid). You must analyze one of each pair of replicate samples (half the total samples) at the minimum storage duration and the other replicate (other half of samples) at the maximum storage duration or within 2 weeks of the initial analysis to identify the effect of storage duration on analyte samples. The minimum time period between collection and storage should be as soon as possible, but no longer than 72 hours after collection of the sample.

      7.4 Sample Stability. After you have conducted sampling and analysis according to Section 7.2 or 7.3 of this method, compare the results at the minimum and maximum storage durations. Calculate the difference in the results using Equation 301-1.

      di = Rmini - Rmaxi     (Eq. 301-1)

      Where:

      di = Difference between the results of the ith replicate pair of samples.

      Rmini = Results from the ith replicate sample pair at the minimum storage duration.

      Rmaxi = Results from the ith replicate sample pair at the maximum storage duration.

      For single samples that can be reanalyzed for sample stability assessment (e.g., bag or canister samples and impinger samples that do not require digestion or extraction), the values for Rmini and Rmaxi will be obtained from the same sample rather than replicate samples.

      7.4.1 Standard Deviation. Determine the standard deviation of the paired samples using Equation 301-2.

      SDd = sqrt[ Σ (di - dm)² / (n - 1) ]     (Eq. 301-2)

      Where:

      SDd = Standard deviation of the differences of the paired samples.

      di = Difference between the results of the ith replicate pair of samples.

      dm = Mean of the paired sample differences.

      n = Total number of paired samples.

      7.4.2 T Test. Test the difference in the results for statistical significance by calculating the t-statistic and determining if the mean of the differences between the results at the minimum storage duration and the results after the maximum storage duration is significant at the 95 percent confidence level and n-1 degrees of freedom. Calculate the value of the t-statistic using Equation 301-3.

      t = |dm| / (SDd / sqrt(n))     (Eq. 301-3)

      Where:

      t = t-statistic.

      dm = The mean of the paired sample differences.

      SDd = Standard deviation of the differences of the paired samples.

      n = Total number of paired samples.

      Compare the calculated t-statistic with the critical value of the t-statistic from Table 301-3 of this method. If the calculated t-value is less than the critical value, the difference is not statistically significant. Therefore, the sampling, analysis, and sample storage procedures ensure stability, and you may submit a request for validation of the candidate test method. If the calculated t-value is greater than the critical value, the difference is statistically significant, and you must repeat the procedures in Section 7.2 or 7.3 of this method with new samples using a shorter proposed maximum storage duration or improved handling and storage procedures.
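
      A minimal sketch of the Section 7.4 calculations, using hypothetical paired results and SciPy only to obtain the critical value (Table 301-3 may be used instead), might look like this:

```python
# Illustrative sketch only (hypothetical data): the Section 7.4 sample
# stability check -- paired differences, their standard deviation, and the
# t-statistic (Equations 301-1 through 301-3 as reconstructed above),
# compared against the two-sided 95 percent critical value.
import numpy as np
from scipy import stats

r_min = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.7])  # results at minimum storage duration
r_max = np.array([10.1, 9.9, 11.0, 10.4, 10.0, 10.6]) # results at maximum storage duration

d = r_min - r_max                            # Eq. 301-1
d_mean = d.mean()
sd_d = d.std(ddof=1)                         # Eq. 301-2
n = d.size
t_stat = abs(d_mean) / (sd_d / np.sqrt(n))   # Eq. 301-3
t_crit = stats.t.ppf(0.975, df=n - 1)        # two-sided 95 percent, n-1 df

# If t_stat <= t_crit, the storage-related difference is not statistically
# significant and the storage procedures are considered to ensure stability.
print(t_stat <= t_crit)
```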

      Determination of Bias and Precision

      8.0 What are the requirements for bias?

      You must determine bias by comparing the results of sampling and analysis using the candidate test method against a reference value. The bias must be no more than 10 percent for the candidate test method to be considered for application to multiple sources. A candidate test method with a bias greater than 10 percent and less than or equal to 30 percent can only be applied on a source-specific basis at the facility at which the validation testing was conducted. In this case, you must use a correction factor for all data collected in the future using the candidate test method. If the bias is more than 30 percent, the candidate test method is unacceptable.

      9.0 What are the requirements for precision?

      You may use a paired sampling system or a quadruplicate sampling system to establish precision for isotopic spiking. You must use a quadruplicate sampling system to establish precision for analyte spiking or when comparing a candidate test method to a validated method. If you are using analyte spiking or isotopic spiking, the precision, expressed as the relative standard deviation (RSD) of the candidate test method, must be less than or equal to 20 percent. If you are comparing the candidate test method to a validated test method, the candidate test method must be at least as precise as the validated method as determined by an F test (see Section 11.2.2 of this method).
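
      As an informal illustration of the precision comparison, the sketch below estimates a variance for each method from within-pair differences of hypothetical quadruplicate data and compares their ratio to an upper one-sided F critical value; the method's own estimators and degrees of freedom are specified in Section 11.2, so treat this strictly as a sketch:

```python
# Illustrative sketch only (hypothetical data): comparing the precision of a
# candidate method to a validated method with an F test. Variances are
# estimated here from within-pair differences (sum of squared pair
# differences divided by 2n); see Section 11.2 for the method's own equations.
import numpy as np
from scipy import stats

cand_pairs = np.array([[51.0, 50.6], [49.8, 50.1], [52.9, 52.4],
                       [48.6, 48.2], [51.5, 51.1], [50.3, 49.9]])
valid_pairs = np.array([[50.1, 49.7], [48.9, 49.3], [52.0, 51.6],
                        [47.5, 47.9], [50.8, 50.4], [49.2, 49.6]])

def paired_variance(pairs):
    """Within-pair variance estimate: sum(d_i^2) / (2n) for pair differences d_i."""
    d = pairs[:, 0] - pairs[:, 1]
    return (d ** 2).sum() / (2 * d.size)

f_stat = paired_variance(cand_pairs) / paired_variance(valid_pairs)
f_crit = stats.f.ppf(0.95, dfn=len(cand_pairs), dfd=len(valid_pairs))  # one-sided 95 percent

# The candidate method is considered at least as precise as the validated
# method when the F statistic does not exceed the critical value.
print(f_stat <= f_crit)
```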

      10.0 What calculations must I perform for isotopic spiking?

      You must analyze the bias, RSD, precision, and data acceptance for isotopic spiking tests according to the provisions in Sections 10.1 through 10.4 of this method.

      10.1 Numerical Bias. Calculate the numerical value of the bias using the results from the analysis of the isotopic spike in the field samples and the calculated value of the spike according to Equation 301-4.

      B = Sm - CS     (Eq. 301-4)

      Where:

      B = Bias at the spike level.

      Sm = Mean of the measured values of the isotopically-labeled analyte in the samples.

      CS = Calculated value of the isotopically-labeled spike level.

      10.2 Standard Deviation. Calculate the standard deviation of the Si values according to Equation 301-5.

      SD = sqrt[ Σ (Si - Sm)² / (n - 1) ]     (Eq. 301-5)

      Where:

      SD = Standard deviation of the candidate test method.

      Si = Measured value of the isotopically-labeled analyte in the ith field sample.

      Sm = Mean of the measured values of the isotopically-labeled analyte in the samples.

      n = Number of isotopically-spiked samples.

      10.3 T Test. Test the bias for statistical significance by calculating the t-statistic using Equation 301-6. Use the standard deviation determined in Section 10.2 of this method and the numerical bias determined in Section 10.1 of this method.

      t = |B| / (SD / sqrt(n))     (Eq. 301-6)

      Where:

      t = Calculated t-statistic.

      B = Bias at the spike level.

      SD = Standard deviation of the candidate test method.

      n = Number of isotopically-spiked samples.

      Compare the calculated t-value with the critical value of the two-sided t-distribution at the 95 percent confidence level and n-1 degrees of freedom (see Table 301-3 of this method). When you conduct isotopic spiking according to the procedures specified in Sections 6.1 and 6.3 of this method as required, this critical value is 2.201 for 11 degrees of freedom. If the calculated t-value is less than or equal to the critical value, the bias is not statistically significant, and the bias of the candidate test method is acceptable. If the calculated t-value is greater than the critical value, the bias is statistically significant, and you must evaluate the relative magnitude of the bias using Equation 301-7.

      BR = (|B| / CS) × 100%     (Eq. 301-7)

      Where:

      BR = Relative bias.

      B = Bias at the spike level.

      CS = Calculated value of the spike level.

      If the relative bias is less than or equal to 10 percent, the bias of the candidate test method is acceptable for use at multiple sources. If the relative bias is greater than 10 percent but less than or equal to 30 percent, and if you correct all data collected with the candidate test method in the future for bias using the source-specific correction factor determined in Equation 301-8, the candidate test method is acceptable only for application to the source at which the validation testing was conducted and may not be applied to any other sites. If either of the preceding two cases applies, you may continue to evaluate the candidate test method by calculating its precision. If not, the candidate test method does not meet the requirements of Method 301.

      Equation 301-8:  CF = 1 / (1 + B/CS)

      Where:

      CF = Source-specific bias correction factor.

      B = Bias at the spike level.

      CS = Calculated value of the spike level.

      If the CF is outside the range of 0.70 to 1.30, the data and method are considered unacceptable.

      10.4 Precision. Calculate the RSD according to Equation 301-9.


      Equation 301-9:  RSD = (SD / Sm) x 100%

      Where:

      RSD = Relative standard deviation of the candidate test method.

      SD = Standard deviation of the candidate test method calculated in Equation 301-5.

      Sm = Mean of the measured values of the spike samples.

      The data and candidate test method are unacceptable if the RSD is greater than 20 percent.
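      The calculations in Sections 10.1 through 10.4 can be scripted. The following Python sketch is illustrative only and is not part of the method; it implements Equations 301-4 through 301-9 as reconstructed above, and the function name, variable names, and example data are hypothetical.

import math

def isotopic_spiking_stats(measured, spike_level):
    """Bias and precision statistics for isotopic spiking (Equations 301-4
    through 301-9).  `measured` holds the n (normally 12) measured values of
    the isotopically-labeled analyte; `spike_level` is the calculated value
    of the spike, CS."""
    n = len(measured)
    s_m = sum(measured) / n                                           # mean of measured values, Sm
    bias = s_m - spike_level                                          # Equation 301-4
    sd = math.sqrt(sum((s - s_m) ** 2 for s in measured) / (n - 1))   # Equation 301-5
    t_stat = abs(bias) / (sd / math.sqrt(n))                          # Equation 301-6
    rel_bias = abs(bias) / spike_level * 100.0                        # Equation 301-7, percent
    corr_factor = 1.0 / (1.0 + bias / spike_level)                    # Equation 301-8 (as reconstructed above)
    rsd = sd / s_m * 100.0                                            # Equation 301-9, percent
    return {"bias": bias, "SD": sd, "t": t_stat,
            "relative_bias_%": rel_bias, "CF": corr_factor, "RSD_%": rsd}

# Hypothetical example: 12 spiked field samples with a calculated spike level of 50.0.
# Compare the returned t with 2.201 (11 degrees of freedom) and check RSD_% <= 20.
print(isotopic_spiking_stats(
    [48.9, 51.2, 49.5, 50.8, 47.6, 52.1, 49.9, 50.3, 48.4, 51.7, 49.1, 50.6], 50.0))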

      11.0 What calculations must I perform for comparison with a validated method?

      If you are comparing a candidate test method to a validated method, then you must analyze the data according to the provisions in this section. If the data from the candidate test method fail either the bias or precision test, the data and the candidate test method are unacceptable. If the Administrator determines that the affected source has highly variable emission rates, the Administrator may require additional precision checks.

      11.1 Bias Analysis. Test the bias for statistical significance at the 95 percent confidence level by calculating the t-statistic.

      11.1.1 Bias. Determine the bias, which is defined as the mean of the differences between the candidate test method and the validated method (dm). Calculate di according to Equation 301-10.

      Equation 301-10:  di = (P1i + P2i)/2 - (V1i + V2i)/2

      Where:

      di = Difference in measured value between the candidate test method and the validated method for each quadruplicate sampling train.

      V1i = First measured value with the validated method in the ith quadruplicate sampling train.

      V2i = Second measured value with the validated method in the ith quadruplicate sampling train.

      P1i = First measured value with the candidate test method in the ith quadruplicate sampling train.

      P2i = Second measured value with the candidate test method in the ith quadruplicate sampling train.

      Calculate the numerical value of the bias using Equation 301-11.

      Equation 301-11:  B = dm = (Σ di) / n

      Where:

      B = Numerical bias.

      di = Difference between the candidate test method and the validated method for the ith quadruplicate sampling train.

      n = Number of quadruplicate sampling trains.

      11.1.2 Standard Deviation of the Differences. Calculate the standard deviation of the differences, SDd, using Equation 301-12.

      Equation 301-12:  SDd = sqrt[ Σ (di - dm)^2 / (n - 1) ]

      Where:

      SDd = Standard deviation of the differences between the candidate test method and the validated method.

      di = Difference in measured value between the candidate test method and the validated method for each quadruplicate sampling train.

      dm = Mean of the differences, di, between the candidate test method and the validated method.

      n = Number of quadruplicate sampling trains.

      11.1.3 T Test. Calculate the t-statistic using Equation 301-13.

      Equation 301-13:  t = |dm| / (SDd / sqrt(n))

      Where:

      t = Calculated t-statistic.

      dm = The mean of the differences, di, between the candidate test method and the validated method.

      SDd = Standard deviation of the differences between the candidate test method and the validated method.

      n = Number of quadruplicate sampling trains.


      For the procedure comparing a candidate test method to a validated test method listed in Table 301-1 of this method, n equals six. Compare the calculated t-statistic with the critical value of the t-statistic, and determine if the bias is significant at the 95 percent confidence level (see Table 301-3 of this method). When six runs are conducted, as specified in Table 301-1 of this method, the critical value of the t-statistic is 2.571 for five degrees of freedom. If the calculated t-value is less than or equal to the critical value, the bias is not statistically significant and the data are acceptable. If the calculated t-value is greater than the critical value, the bias is statistically significant, and you must evaluate the magnitude of the relative bias using Equation 301-14.

      Equation 301-14:  BR = (|B| / VS) x 100%

      Where:

      BR = Relative bias.

      B = Bias as calculated in Equation 301-11.

      VS = Mean of measured values from the validated method.

      If the relative bias is less than or equal to 10 percent, the bias of the candidate test method is acceptable. On a source-specific basis, if the relative bias is greater than 10 percent but less than or equal to 30 percent, and if you correct all data collected in the future with the candidate test method for the bias using the correction factor, CF, determined in Equation 301-8 (using VS for CS), the bias of the candidate test method is acceptable for application to the source at which the validation testing was conducted. If either of the preceding two cases applies, you may continue to evaluate the candidate test method by calculating its precision. If not, the candidate test method does not meet the requirements of Method 301.
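      A minimal Python sketch of the Section 11.1 bias analysis (Equations 301-10 through 301-14) is shown below. It is illustrative only; the sign convention for di (candidate minus validated) follows the reconstruction above, and the function name and example data are hypothetical.

import math

def comparison_bias(validated_pairs, candidate_pairs):
    """Bias analysis for comparison against a validated method (Equations
    301-10 through 301-14).  Each argument is a list of (first, second)
    measured values, one tuple per quadruplicate sampling train."""
    # Equation 301-10: candidate train mean minus validated train mean
    d = [(p1 + p2) / 2.0 - (v1 + v2) / 2.0
         for (v1, v2), (p1, p2) in zip(validated_pairs, candidate_pairs)]
    n = len(d)
    d_m = sum(d) / n                                                  # Equation 301-11 (bias B = dm)
    sd_d = math.sqrt(sum((di - d_m) ** 2 for di in d) / (n - 1))      # Equation 301-12
    t_stat = abs(d_m) / (sd_d / math.sqrt(n))                         # Equation 301-13
    v_s = sum(v for pair in validated_pairs for v in pair) / (2 * n)  # mean of validated results, VS
    rel_bias = abs(d_m) / v_s * 100.0                                 # Equation 301-14, percent
    return {"B": d_m, "SDd": sd_d, "t": t_stat, "relative_bias_%": rel_bias}

# Hypothetical results from six quadruplicate trains (critical t = 2.571, 5 degrees of freedom):
validated = [(10.2, 10.4), (11.1, 10.9), (9.8, 10.0), (10.5, 10.6), (10.1, 9.9), (10.8, 10.7)]
candidate = [(10.5, 10.6), (11.3, 11.2), (10.1, 10.0), (10.8, 10.9), (10.3, 10.2), (11.0, 10.9)]
print(comparison_bias(validated, candidate))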

      11.2 Precision. Compare the estimated variance (or standard deviation) of the candidate test method to that of the validated test method according to Sections 11.2.1 and 11.2.2 of this method. If a significant difference is determined using the F test, the candidate test method and the results are rejected. If the F test does not show a significant difference, then the candidate test method has acceptable precision.

      11.2.1 Candidate Test Method Variance. Calculate the estimated variance of the candidate test method according to Equation 301-15.

      Equation 301-15:  p = Σ (di)^2 / (2n)

      Where:

      p = Estimated variance of the candidate test method.

      di = The difference between the ith pair of samples collected with the candidate test method in a single quadruplicate train.

      n = Total number of paired samples (quadruplicate trains).

      Calculate the estimated variance of the validated test method according to Equation 301-16.

      Equation 301-16:  v = Σ (di)^2 / (2n)

      Where:

      v = Estimated variance of the validated test method.

      di = The difference between the ith pair of samples collected with the validated test method in a single quadruplicate train.

      n = Total number of paired samples (quadruplicate trains).

      11.2.2 The F test. Determine if the estimated variance of the candidate test method is greater than that of the validated method by calculating the F-value using Equation 301-17.

      Equation 301-17:  F = p / v

      Where:

      F = Calculated F value.

      p = The estimated variance of the candidate test method.

      v = The estimated variance of the validated method.

      Compare the calculated F value with the one-sided confidence level for F from Table 301-4 of this method. The upper one-sided confidence level of 95 percent for F(6,6) is 4.28 when the procedure specified in Table 301-1 of this method for quadruplicate sampling trains is followed. If the calculated F value is greater than the critical F value, the difference in precision is significant, and the data and the candidate test method are unacceptable.
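      The Section 11.2 precision comparison can be sketched as follows. The sketch is illustrative only; the variance estimates assume the paired-difference form of Equations 301-15 and 301-16 reconstructed above, and the example data are hypothetical.

def comparison_precision(validated_pairs, candidate_pairs, f_critical=4.28):
    """F test of precision (Equations 301-15 through 301-17).  Each argument
    holds the (first, second) measured values from one quadruplicate train;
    with six trains the critical F(6,6) value is 4.28."""
    n = len(candidate_pairs)
    # Equations 301-15 and 301-16: estimated variances from within-train pair differences
    var_candidate = sum((a - b) ** 2 for a, b in candidate_pairs) / (2.0 * n)
    var_validated = sum((a - b) ** 2 for a, b in validated_pairs) / (2.0 * n)
    f_value = var_candidate / var_validated                           # Equation 301-17
    return f_value, f_value <= f_critical                             # True means precision is acceptable

# Hypothetical paired results from six quadruplicate trains:
validated = [(10.2, 10.4), (11.1, 10.9), (9.8, 10.0), (10.5, 10.6), (10.1, 9.9), (10.8, 10.7)]
candidate = [(10.5, 10.6), (11.3, 11.2), (10.1, 10.0), (10.8, 10.9), (10.3, 10.2), (11.0, 10.9)]
print(comparison_precision(validated, candidate))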

      12.0 What calculations must I perform for analyte spiking?

      You must analyze the data for analyte spike testing according to this section.

      12.1 Bias Analysis. Test the bias for statistical significance at the 95 percent confidence level by calculating the t-statistic.


      12.1.1 Bias. Determine the bias, which is defined as the mean of the differences between the spiked samples and the unspiked samples in each quadruplicate sampling train minus the spiked amount, using Equation 301-18.

      Equation 301-18:  di = (S1i + S2i)/2 - (M1i + M2i)/2 - CS

      Where:

      di = Difference between the spiked samples and unspiked samples in each quadruplicate sampling train minus the spiked amount.

      S1i = Measured value of the first spiked sample in the ith quadruplicate sampling train.

      S2i = Measured value of the second spiked sample in the ith quadruplicate sampling train.

      M1i = Measured value of the first unspiked sample in the ith quadruplicate sampling train.

      M2i = Measured value of the second unspiked sample in the ith quadruplicate sampling train.

      CS = Calculated value of the spike level.

      Calculate the numerical value of the bias using Equation 301-19.

      Equation 301-19:  B = (Σ di) / n

      Where:

      B = Numerical value of the bias.

      di = Difference between the spiked samples and unspiked samples in each quadruplicate sampling train minus the spiked amount.

      n = Number of quadruplicate sampling trains.

      12.1.2 Standard Deviation of the Differences. Calculate the standard deviation of the differences using Equation 301-20.

      Equation 301-20:  SDd = sqrt[ Σ (di - dm)^2 / (n - 1) ]

      Where:

      SDd = Standard deviation of the differences of paired samples.

      di = Difference between the spiked samples and unspiked samples in each quadruplicate sampling train minus the spiked amount.

      dm = The mean of the differences, di, between the spiked samples and unspiked samples.

      n = Total number of quadruplicate sampling trains.

      12.1.3 T Test. Calculate the t-statistic using Equation 301-21, where n is the total number of test sample differences (di). For the quadruplicate sampling system procedure in Table 301-1 of this method, n equals six.

      Equation 301-21:  t = |dm| / (SDd / sqrt(n))

      Where:

      t = Calculated t-statistic.

      dm = Mean of the difference, di, between the spiked samples and unspiked samples.

      SDd = Standard deviation of the differences of paired samples.

      n = Number of quadruplicate sampling trains.

      Compare the calculated t-statistic with the critical value of the t-statistic, and determine if the bias is significant at the 95 percent confidence level. When six quadruplicate runs are conducted, as specified in Table 301-1 of this method, the two-sided critical value at the 95 percent confidence level is 2.571 for five degrees of freedom. If the calculated t-value is less than or equal to the critical value, the bias is not statistically significant and the data are acceptable. If the calculated t-value is greater than the critical value, the bias is statistically significant and you must evaluate the magnitude of the relative bias using Equation 301-22.

      Equation 301-22:  BR = (|B| / CS) x 100%

      Where:

      BR = Relative bias.

      B = Bias at the spike level from Equation 301-19.

      CS = Calculated value at the spike level.

      If the relative bias is less than or equal to 10 percent, the bias of the candidate test method is acceptable. On a source-specific basis, if the relative bias is greater than 10 percent but less than or equal to 30 percent, and if you correct all data collected with the candidate test method in the future for the magnitude of the bias using Equation 301-8, the bias of the candidate test method is acceptable for application to the source at which the validation testing was conducted. Proceed to evaluate the precision of the candidate test method.
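      A Python sketch of the Section 12.1 bias analysis (Equations 301-18 through 301-22) follows. It is illustrative only, and the function name and example data are hypothetical.

import math

def analyte_spiking_bias(spiked_pairs, unspiked_pairs, spike_level):
    """Bias analysis for analyte spiking (Equations 301-18 through 301-22).
    Each *_pairs argument holds the (first, second) measured values from one
    quadruplicate train; spike_level is the calculated spike value, CS."""
    # Equation 301-18: spiked-minus-unspiked train means, less the spiked amount
    d = [(s1 + s2) / 2.0 - (m1 + m2) / 2.0 - spike_level
         for (s1, s2), (m1, m2) in zip(spiked_pairs, unspiked_pairs)]
    n = len(d)
    bias = sum(d) / n                                                 # Equation 301-19
    sd_d = math.sqrt(sum((di - bias) ** 2 for di in d) / (n - 1))     # Equation 301-20
    t_stat = abs(bias) / (sd_d / math.sqrt(n))                        # Equation 301-21
    rel_bias = abs(bias) / spike_level * 100.0                        # Equation 301-22, percent
    return {"B": bias, "SDd": sd_d, "t": t_stat, "relative_bias_%": rel_bias}

# Hypothetical results from six quadruplicate trains with a calculated spike level of 5.0:
spiked = [(14.8, 15.1), (15.3, 15.0), (14.6, 14.9), (15.2, 15.4), (14.7, 14.8), (15.0, 15.1)]
unspiked = [(10.1, 10.0), (10.4, 10.2), (9.8, 9.9), (10.3, 10.5), (9.9, 10.0), (10.2, 10.1)]
print(analyte_spiking_bias(spiked, unspiked, 5.0))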

      12.2 Precision. Calculate the standard deviation using Equation 301-23.

      Equation 301-23:  SD = sqrt[ Σ (Si - Sm)^2 / (n - 1) ]

      Where:

      SD = Standard deviation of the candidate test method.

      Si = Measured value of the analyte in the ith spiked sample.

      Sm = Mean of the measured values of the analyte in all the spiked samples.

      n = Number of spiked samples.

      Calculate the RSD of the candidate test method using Equation 301-9, where SD and Sm are the values from Equation 301-23. The data and candidate test method are unacceptable if the RSD is greater than 20 percent.
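      The Section 12.2 precision calculation can be sketched as follows (illustrative only; the function name and data are hypothetical).

import math

def analyte_spiking_precision(spiked_values):
    """Precision for analyte spiking: the standard deviation of Equation
    301-23 and the RSD of Equation 301-9.  `spiked_values` holds every
    measured spiked sample (normally 12, two per quadruplicate train)."""
    n = len(spiked_values)
    s_m = sum(spiked_values) / n
    sd = math.sqrt(sum((s - s_m) ** 2 for s in spiked_values) / (n - 1))  # Equation 301-23
    rsd = sd / s_m * 100.0                                                # Equation 301-9, percent
    return sd, rsd   # the method is unacceptable if rsd > 20

# Hypothetical spiked-sample results (two per train, six trains):
print(analyte_spiking_precision(
    [14.8, 15.1, 15.3, 15.0, 14.6, 14.9, 15.2, 15.4, 14.7, 14.8, 15.0, 15.1]))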

      13.0 How do I conduct tests at similar sources?

      If the Administrator has approved the use of an alternative test method to a test method required in 40 CFR part 59, 60, 61, 63, or 65 for an affected source, and you would like to apply the alternative test method to a similar source, then you must petition the Administrator as described in Section 17.1.1 of this method.

      Optional Requirements

      14.0 How do I use and conduct ruggedness testing?

      Ruggedness testing is an optional requirement for validation of a candidate test method that is intended for the source where the validation testing was conducted. Ruggedness testing is required for validation of a candidate test method intended to be used at multiple sources. If you want to use a validated test method at a concentration that is different from the concentration in the applicable emission limitation under 40 CFR part 59, 60, 61, 63, or 65, or for a source category that is different from the source category that the test method specifies, then you must conduct ruggedness testing according to the procedures in Reference 18.16 of Section 18.0 of this method and submit a request for a waiver for conducting Method 301 at that different source category according to Section 17.1.1 of this method.

      Ruggedness testing is a study that can be conducted in the laboratory or the field to determine the sensitivity of a method to parameters such as analyte concentration, sample collection rate, interferent concentration, collection medium temperature, and sample recovery temperature. You conduct ruggedness testing by changing several variables simultaneously instead of changing one variable at a time. For example, you can determine the effect of seven variables in only eight experiments. (W.J. Youden, Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists, Washington, DC, 1975, pp. 33-36).
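      As an illustration of the eight-experiment, seven-variable approach described above, the following Python sketch uses a balanced two-level design of the kind given in Reference 18.16. The particular +1/-1 matrix and the response values are illustrative assumptions, not values from Method 301.

# A balanced two-level design for seven factors (A-G) in eight runs; +1 is the
# high setting and -1 the low setting of each factor.  The matrix below and the
# responses in the example are illustrative assumptions, not values from Method 301.
DESIGN = [
    [+1, +1, +1, +1, +1, +1, +1],
    [+1, +1, -1, +1, -1, -1, -1],
    [+1, -1, +1, -1, +1, -1, -1],
    [+1, -1, -1, -1, -1, +1, +1],
    [-1, +1, +1, -1, -1, +1, -1],
    [-1, +1, -1, -1, +1, -1, +1],
    [-1, -1, +1, +1, -1, -1, +1],
    [-1, -1, -1, +1, +1, +1, -1],
]

def factor_effects(responses):
    """Main effect of each factor: mean response at the high setting minus
    mean response at the low setting (four runs each)."""
    return [sum(row[j] * y for row, y in zip(DESIGN, responses)) / 4.0
            for j in range(len(DESIGN[0]))]

# Hypothetical responses from the eight ruggedness runs:
print(factor_effects([10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.8, 10.1]))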

      15.0 How do I determine the Limit of Detection for the candidate test method?

      Determination of the Limit of Detection (LOD) as specified in Sections 15.1 and 15.2 of this method is required for source-specific method validation and validation of a candidate test method intended to be used for multiple sources.

      15.1 Limit of Detection. The LOD is the minimum concentration of a substance that can be measured and reported with 99 percent confidence that the analyte concentration is greater than zero. For this protocol, the LOD is defined as three times the standard deviation, So, at the blank level.

      15.2 Purpose. The LOD establishes the lower detection limit of the candidate test method. You must calculate the LOD using the applicable procedures found in Table 301-5 of this method. For candidate test methods that collect the analyte in a sample matrix prior to an analytical measurement, you must determine the LOD using Procedure I in Table 301-5 of this method by calculating a method detection limit (MDL) as described in 40 CFR part 136, appendix B. For the purposes of this section, the LOD is equivalent to the calculated MDL. For radiochemical methods, use the Multi-Agency Radiological Laboratory Analytical Protocols (MARLAP) Manual (i.e., use the minimum detectable concentration (MDC) and not the LOD) available at https://www.epa.gov/radiation/marlap-manual-and-supporting-documents.
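      For Procedure I, the classic 40 CFR part 136, appendix B computation takes the MDL as the standard deviation of at least seven low-level replicates multiplied by the Student's t value at the 99 percent confidence level. The short Python sketch below is illustrative only (the function name and example data are hypothetical); the regulatory procedure in appendix B governs.

import math

# Student's t at the 99th percentile for n-1 degrees of freedom (n = 7 to 10 replicates),
# as used in the classic 40 CFR part 136, appendix B computation.
T99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

def mdl_procedure_i(replicates):
    """Procedure I sketch: method detection limit from at least seven replicate
    low-level spiked samples; for this protocol the LOD is taken as the MDL."""
    n = len(replicates)
    mean = sum(replicates) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))  # replicate standard deviation
    return T99[n - 1] * s

# Hypothetical replicate results near the expected LOD:
print(mdl_procedure_i([0.52, 0.47, 0.55, 0.49, 0.51, 0.46, 0.53]))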

      Other Requirements and Information

      16.0 How do I apply for approval to use a candidate test method?

      16.1 Submitting Requests. You must request to use a candidate test method according to the procedures in Sec. 63.7(f) or similar sections of 40 CFR parts 59, 60, 61, and 65 (Sec. 59.104, Sec. 59.406, Sec. 60.8(b), Sec. 61.13(h)(1)(ii), or Sec. 65.158(a)(2)(iii)). You cannot use a candidate test method to meet any requirement under these parts until the Administrator has approved your request. The request must include a field validation report containing the information in Section 16.2 of this method. You must submit the request to the Group Leader, Measurement Technology Group, U.S. Environmental Protection Agency, E143-02, Research Triangle Park, NC 27711.

      16.2 Field Validation Report. The field validation report must contain the information in Sections 16.2.1 through 16.2.8 of this method.

      16.2.1 Regulatory objectives for the testing, including a description of the reasons for the test, applicable emission limits, and a description of the source.

      16.2.2 Summary of the results and calculations shown in Sections 6.0 through 16.0 of this method, as applicable.

      16.2.3 Reference material certification and value(s).

      16.2.4 Discussion of laboratory evaluations.

      16.2.5 Discussion of field sampling.

      16.2.6 Discussion of sample preparation and analysis.

      16.2.7 Storage times of samples (and extracts, if applicable).

      16.2.8 Reasons for eliminating any results.

      17.0 How do I request a waiver?

      17.1 Conditions for Waivers. If you meet one of the criteria in Section 17.1.1 or 17.1.2 of this method, the Administrator may waive the requirement to use the procedures in this method to validate an alternative or other candidate test method. In addition, if the EPA currently recognizes an appropriate test method or considers the candidate test method to be satisfactory for a particular source, the Administrator may waive the use of this protocol or may specify a less rigorous validation procedure.

      17.1.1 Similar Sources. If the alternative or other candidate test method that you want to use was validated for source-specific application at another source and you can demonstrate to the Administrator's satisfaction that your affected source is similar to that validated source, then the Administrator may waive the requirement for you to validate the alternative or other candidate test method. One procedure you may use to demonstrate the applicability of the method to your affected source is to conduct a ruggedness test as described in Section 14.0 of this method.

      17.1.2 Documented Methods. If the bias, precision, LOD, or ruggedness of the alternative or other candidate test method that you are proposing have been demonstrated through laboratory tests or protocols different from this method, and you can demonstrate to the Administrator's satisfaction that the bias, precision, LOD, or ruggedness apply to your application, then the Administrator may waive the requirement to use this method or to use part of this method.

      17.2 Submitting Applications for Waivers. You must sign and submit each request for a waiver from the requirements in this method in writing. The request must be submitted to the Group Leader, Measurement Technology Group, U.S. Environmental Protection Agency, E143-02, Research Triangle Park, NC 27711.

      17.3 Information Application for Waiver. The request for a waiver must contain a thorough description of the candidate test method, the intended application, and results of any validation or other supporting documents. The request for a waiver must contain, at a minimum, the information in Sections 17.3.1 through 17.3.4 of this method. The Administrator may request additional information if necessary to determine whether this method can be waived for a particular application.

      17.3.1 A Clearly Written Test Method. The candidate test method should preferably be written in the format of 40 CFR part 60, appendix A, Test Methods. Additionally, the candidate test method must include an applicability statement, concentration range, precision, bias (accuracy), and the minimum and maximum storage durations within which samples must be analyzed.

      17.3.2 Summaries of Previous Validation Tests or Other Supporting Documents. If you use a different procedure from that described in this method, you must submit documents substantiating the bias and precision values to the Administrator's satisfaction.

      17.3.3 Ruggedness Testing Results. You must submit results of ruggedness testing conducted according to Section 14.0 of this method, sample stability conducted according to Section 7.0 of this method, and detection limits conducted according to Section 15.0 of this method, as applicable. For example, you would not need to submit ruggedness testing results if you will be using the method at the same affected source and level at which it was validated.

      17.3.4 Applicability Statement and Basis for Waiver Approval. Discussion of the applicability statement and basis for approval of the waiver. This discussion should address as applicable the following: applicable regulation, emission standards, effluent characteristics, and process operations.

      18.0 Where can I find additional information?

      You can find additional information in the references in Sections 18.1 through 18.18 of this method.

      18.1 Albritton, J.R., G.B. Howe, S.B. Tompkins, R.K.M. Jayanty, and C.E. Decker. 1989. Stability of Parts-Per-Million Organic Cylinder Gases and Results of Source Test Analysis Audits, Status Report No. 11. Environmental Protection Agency Contract 68-02-4125. Research Triangle Institute, Research Triangle Park, NC. September.

      18.2 ASTM Standard E 1169-89 (current version), ``Standard Guide for Conducting Ruggedness Tests,'' available from ASTM, 100 Barr Harbor Drive, West Conshohocken, PA 19428.

      18.3 DeWees, W.G., P.M. Grohse, K.K. Luk, and F.E. Butler. 1989. Laboratory and Field Evaluation of a Methodology for Speciating Nickel Emissions from Stationary Sources. EPA Contract 68-02-4442. Prepared for Atmospheric Research and Environmental Assessment Laboratory, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC 27711. January.

      18.4 International Conference on Harmonization of Technical Requirements for the Registration of Pharmaceuticals for Human Use, ICH-Q2A, ``Text on Validation of Analytical Procedures,'' 60 FR 11260 (March 1995).

      18.5 International Conference on Harmonization of Technical Requirements for the Registration of Pharmaceuticals for Human Use, ICH-Q2b, ``Validation of Analytical Procedures: Methodology,'' 62 FR 27464 (May 1997).

      18.6 Keith, L.H., W. Crummer, J. Deegan Jr., R.A. Libby, J.K. Taylor, and G. Wentler. 1983. Principles of Environmental Analysis. American Chemical Society, Washington, DC.

      18.7 Maxwell, E.A. 1974. Estimating variances from one or two measurements on each sample. Amer. Statistician 28:96-97.

      18.8 Midgett, M.R. 1977. How EPA Validates NSPS Methodology. Environ. Sci. & Technol. 11(7):655-659.

      18.9 Mitchell, W.J., and M.R. Midgett. 1976. Means to evaluate performance of stationary source test methods. Environ. Sci. & Technol. 10:85-88.

      18.10 Plackett, R.L., and J.P. Burman. 1946. The design of optimum multifactorial experiments. Biometrika, 33:305.

      18.11 Taylor, J.K. 1987. Quality Assurance of Chemical Measurements. Lewis Publishers, Inc., pp. 79-81.

      18.12 U.S. Environmental Protection Agency. 1978. Quality Assurance Handbook for Air Pollution Measurement Systems: Volume III. Stationary Source Specific Methods. Publication No. EPA-600/4-77-027b. Office of Research and Development Publications, 26 West St. Clair St., Cincinnati, OH 45268.

      18.13 U.S. Environmental Protection Agency. 1981. A Procedure for Establishing Traceability of Gas Mixtures to Certain National Bureau of Standards Standard Reference Materials. Publication No. EPA-600/7-81-010. Available from the U.S. EPA, Quality Assurance Division (MD-77), Research Triangle Park, NC 27711.

      18.14 U.S. Environmental Protection Agency. 1991. Protocol for The Field Validation of Emission Concentrations from Stationary Sources. Publication No. 450/4-90-015. Available from the U.S. EPA, Emission Measurement Technical Information Center, Technical Support Division (MD-14), Research Triangle Park, NC 27711.

      18.15 Wernimont, G.T., ``Use of Statistics to Develop and Evaluate Analytical Methods,'' AOAC, 1111 North 19th Street, Suite 210, Arlington, VA 22209, USA, 78-82 (1987).

      18.16 Youden, W.J. Statistical techniques for collaborative tests. In: Statistical Manual of the Association of Official Analytical Chemists, Association of Official Analytical Chemists, Washington, DC, 1975, pp. 33-36.

      18.17 NIST/SEMATECH (current version), ``e-Handbook of Statistical Methods,'' available from NIST, http://www.itl.nist.gov/div898/handbook/.

      18.18 Statistical Table, http://www.math.usask.ca/~szafron/Stats244/f_table_0_05.pdf.

      19.0 Tables.


      Table 301-1--Sampling Procedures

      If you are comparing the candidate test method against a validated method, you must collect a total of 24 samples using a quadruplicate sampling system (a total of six sets of replicate samples). In each quadruplicate sample set, you must use the validated test method to collect and analyze half of the samples.

      If you are using isotopic spiking (which can be used only with methods capable of measuring multiple isotopes simultaneously), you must collect a total of 12 samples, all of which are spiked with isotopically-labeled analyte. You may collect the samples either by obtaining six sets of paired samples or three sets of quadruplicate samples.

      If you are using analyte spiking, you must collect a total of 24 samples using the quadruplicate sampling system (a total of six sets of replicate samples--two spiked and two unspiked).

      Table 301-2--Storage and Sampling Procedures for Stack Test Emissions

      If you are using isotopic or analyte spiking procedures:

      With sample container (bag or canister) or impinger sampling systems that are not subject to dilution or other preparation steps, you must analyze six of the samples within 7 days and then analyze the same six samples at the proposed maximum storage duration or 2 weeks after the initial analysis.

      With sorbent and impinger sampling systems that require extraction or digestion, you must extract or digest six of the samples within 7 days and extract or digest six other samples at the proposed maximum storage duration or 2 weeks after the first extraction or digestion. Analyze an aliquot of the first six extracts (digestates) within 7 days and an aliquot at the proposed maximum storage duration or 2 weeks after the initial analysis. This will allow analysis of extract storage impacts.

      With sorbent sampling systems that require thermal desorption, you must analyze six samples within 7 days and analyze another set of six samples at the proposed maximum storage time or within 2 weeks of the initial analysis.

      If you are comparing a candidate test method against a validated test method:

      With sample container (bag or canister) or impinger sampling systems that are not subject to dilution or other preparation steps, you must analyze at least six of the candidate test method samples within 7 days and then analyze the same six samples at the proposed maximum storage duration or within 2 weeks of the initial analysis.

      With sorbent and impinger sampling systems that require extraction or digestion, you must extract or digest six of the candidate test method samples within 7 days and extract or digest six other samples at the proposed maximum storage duration or within 2 weeks of the first extraction or digestion. Analyze an aliquot of the first six extracts (digestates) within 7 days and an aliquot at the proposed maximum storage duration or within 2 weeks of the initial analysis. This will allow analysis of extract storage impacts.

      With sorbent systems that require thermal desorption, you must analyze six samples within 7 days and analyze another set of six samples at the proposed maximum storage duration or within 2 weeks of the initial analysis.

      Table 301-3--Critical Values of t for the Two-Tailed 95 Percent Confidence Limit (1)

      Degrees of freedom            t95
      1.........................  12.706
      2.........................   4.303
      3.........................   3.182
      4.........................   2.776
      5.........................   2.571
      6.........................   2.447
      7.........................   2.365
      8.........................   2.306
      9.........................   2.262
      10........................   2.228
      11........................   2.201
      12........................   2.179
      13........................   2.160
      14........................   2.145
      15........................   2.131
      16........................   2.120
      17........................   2.110
      18........................   2.101
      19........................   2.093
      20........................   2.086

      (1) Adapted from Reference 18.17 in Section 18.0.
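      As an illustrative cross-check (not part of the method), the critical values in Table 301-3 can be reproduced with scipy:

from scipy import stats

# Two-tailed 95 percent critical values of t (2.5 percent in each tail).
for df in range(1, 21):
    print(df, round(float(stats.t.ppf(0.975, df)), 3))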


      Table 301-4--Upper Critical Values of the F Distribution for the 95 Percent Confidence Limit (1)

      Numerator (k1) and denominator (k2)      Upper critical value,
      degrees of freedom                       F.05(k1, k2)
      1,1....................................  161.40
      2,2....................................   19.00
      3,3....................................    9.28
      4,4....................................    6.39
      5,5....................................    5.05
      6,6....................................    4.28
      7,7....................................    3.79
      8,8....................................    3.44
      9,9....................................    3.18
      10,10..................................    2.98
      11,11..................................    2.82
      12,12..................................    2.69
      13,13..................................    2.58
      14,14..................................    2.48
      15,15..................................    2.40
      16,16..................................    2.33
      17,17..................................    2.27
      18,18..................................    2.22
      19,19..................................    2.17
      20,20..................................    2.12

      (1) Adapted from References 18.17 and 18.18 in Section 18.0.
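      Similarly, the upper critical values in Table 301-4 can be approximately reproduced with scipy (illustrative only; small rounding differences are expected):

from scipy import stats

# Upper 95 percent critical values of F for equal numerator and denominator degrees of freedom.
for k in range(1, 21):
    print(f"{k},{k}", round(float(stats.f.ppf(0.95, k, k)), 2))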

      Table 301-5--Procedures for Estimating So

      If the estimated LOD (LOD1, the expected approximate LOD concentration level) is no more than twice the calculated LOD, or an analyte in a sample matrix was collected prior to an analytical measurement, use Procedure I. If the estimated LOD (LOD1, the expected approximate LOD concentration level) is greater than twice the calculated LOD, use Procedure II.

      Procedure I: Determine the LOD by calculating a method detection limit (MDL) as described in 40 CFR part 136, appendix B.

      Procedure II: Prepare two additional standards (LOD2 and LOD3) at concentration levels lower than the standard used in Procedure I (LOD1). Sample and analyze each of these standards (LOD2 and LOD3) at least seven times. Calculate the standard deviation (S2 and S3) for each concentration level. Plot the standard deviations of the three test standards (S1, S2, and S3) as a function of concentration. Draw a best-fit straight line through the data points and extrapolate to zero concentration. The standard deviation at zero concentration is So. Calculate LOD0 (referred to as the calculated LOD) as 3 times So.

      * * * * *

      FR Doc. 2018-05400 Filed 3-19-18; 8:45 am

      BILLING CODE 6560-50-P
