﻿<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD with MathML3 v1.2 20190208//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd">
<article
    xmlns:mml="http://www.w3.org/1998/Math/MathML"
    xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="3.0" xml:lang="en" article-type="article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">JML</journal-id>
      <journal-title-group>
        <journal-title>Journal of Mathematics Letters</journal-title>
      </journal-title-group>
      <issn pub-type="epub"></issn>
      <issn pub-type="ppub"></issn>
      <publisher>
        <publisher-name>Science Publications</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.31586/jml.2023.618</article-id>
      <article-id pub-id-type="publisher-id">JML-618</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Article</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>
          The Efficiency of the Proposed Smoothing Method over the Classical Cubic Smoothing Spline Regression Model with Autocorrelated Residual
        </article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
<name>
<surname>Adams</surname>
<given-names>Samuel Olorunfemi</given-names>
</name>
<xref rid="af1" ref-type="aff">1</xref>
<xref rid="cr1" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Asemota</surname>
<given-names>Omorogbe Joseph</given-names>
</name>
<xref rid="af2" ref-type="aff">2</xref>
</contrib>
      </contrib-group>
<aff id="af1"><label>1</label> Department of Statistics, University of Abuja, Abuja, Nigeria</aff>
<aff id="af2"><label>2</label> Department of Economic and Social Research, National Institute for Legislative and Democratic Studies, Abuja, Nigeria</aff>
<author-notes>
<corresp id="c1">
<label>*</label>Corresponding author at: Department of Statistics, University of Abuja, Abuja, Nigeria
</corresp>
</author-notes>
      <pub-date pub-type="epub">
        <day>18</day>
        <month>03</month>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <issue>1</issue>
      <history>
        <date date-type="received">
          <day>18</day>
          <month>03</month>
          <year>2023</year>
        </date>
        <date date-type="rev-recd">
          <day>18</day>
          <month>03</month>
          <year>2023</year>
        </date>
        <date date-type="accepted">
          <day>18</day>
          <month>03</month>
          <year>2023</year>
        </date>
        <date date-type="pub">
          <day>18</day>
          <month>03</month>
          <year>2023</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>&#xa9; Copyright 2023 by authors and Trend Research Publishing Inc. </copyright-statement>
        <copyright-year>2023</copyright-year>
        <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
          <license-p>This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/</license-p>
        </license>
      </permissions>
      <abstract>
        Spline smoothing is a technique used to filter out noise in time series observations when fitting nonparametric regression models. Its performance depends on the choice of the smoothing parameter. Most existing smoothing methods applied to time series data tend to overfit in the presence of autocorrelated errors. This study aims to determine the optimum performance value, goodness of fit, and model overfitting properties of the proposed Smoothing Method (PSM), Generalized Maximum Likelihood (GML), Generalized Cross-Validation (GCV), and Unbiased Risk (UBR) smoothing parameter selection methods. A Monte Carlo experiment of 1,000 trials was carried out at three sample sizes (20, 60, and 100) and three levels of autocorrelation (0.2, 0.5, and 0.8). The performances of the four smoothing methods were estimated and compared using the Predictive Mean Squared Error (PMSE) criterion. The findings of the study revealed that, for time series observations with autocorrelated errors, the PSM provides the best-fit smoothing method for the model; the PSM does not overfit the data at any of the autocorrelation levels considered; the optimum value of the PSM was attained at a weight of 0.04 when there is autocorrelation in the error term; and the PSM performed better than the GCV, GML, and UBR smoothing methods at all time series sizes considered (T = 20, 60, and 100). For the real-life data employed in the study, the PSM proved to be the most efficient among the GCV, GML, PSM, and UBR smoothing methods compared. The study concluded that the PSM provides the best fit as a smoothing method, works well at all autocorrelation levels (&#x003c1; = 0.2, 0.5, and 0.8), and does not overfit time series observations. The study recommended the proposed smoothing method for time series observations with autocorrelation in the error term and for real-life econometric data. This study can be applied to nonparametric regression, nonparametric forecasting, and spatial, survival, and econometric observations.
      </abstract>
      <kwd-group>
        <kwd>Cubic spline</kwd>
        <kwd>goodness of fit</kwd>
        <kwd>Generalized Maximum Likelihood (GML)</kwd>
        <kwd>Generalized Cross-Validation (GCV)</kwd>
        <kwd>Mallow CP criterion (MCP)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec1">
<title>Introduction</title><p>The smoothing spline is a spline consisting of piecewise third-order polynomials that pass through a set of control points. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of <math><semantics><mrow><mi>m</mi><mo>-</mo><mn>2</mn></mrow></semantics></math> equations. This produces a so-called &#x0201c;natural&#x0201d; cubic spline and leads to a simple tri-diagonal system that can be solved efficiently to give the coefficients of the polynomials. The parameters are estimated by minimizing the sum of the residual sum of squares (RSS) and a roughness penalty. A general measure of &#x0201c;fidelity to the observations&#x0201d; for a curve <italic>g</italic> is the residual sum of squares. If <italic>g</italic> is allowed to be any curve &#x02013; unrestricted in functional form &#x02013; then this measure can be reduced to zero by any<italic> g </italic>that interpolates the observations. Such a curve would not be admitted because it is not unique and because it over-interprets the structure of the data [
<xref ref-type="bibr" rid="R1">1</xref>]. The spline smoothing approach avoids implausible interpolation of the observations by balancing the competing goals of producing a good fit to the observations and producing a curve without too much rapid local variation. The main use of splines is interpolation, but they can also be used for parametric and nonparametric regression modeling. The most commonly used spline smoothing technique is the cubic spline. Spline smoothing provides an alternative to local polynomial regression, and it is also an attractive component of additive regression models. It is well known that correlation greatly affects the selection of the smoothing parameter, which is critical to the performance of the smoothing spline. The commonly used approach in time series analysis is the classical ARMA method, which assumes linear dependence on past values and past innovations. Generalized Cross-Validation (GCV) and Generalized Maximum Likelihood (GML) are the most widely used spline smoothing methods for selecting the optimal value of the smoothing parameter. Many scholars have carried out research in this area; most found that time series data violate the assumed independence of regressors and error terms, which leads to an autocorrelation problem. Applying smoothing parameter estimators such as GCV and GML does not always solve this problem because they occasionally fail to smooth adequately. </p>
<p>Over the last two decades, research on spline smoothing estimation methods has produced a vast amount of information and discoveries from researchers evaluating the efficiency and performance of existing estimation techniques when autocorrelation is present in the error terms. In this work, the proposed smoothing method is compared with three classical smoothing spline parameter selection techniques, with the intention of providing a robust smoothing parameter estimation method that alleviates the problem of overfitting models for time series data with low, moderate, and high autocorrelation levels, as well as the problems associated with the smoothing methods&#x02019; performance when different time series sample sizes are used.</p>
<p>In Section 2, the cubic smoothing spline is discussed, along with methods for selecting the smoothing parameter, such as Generalized Cross-Validation, Generalized Maximum Likelihood, and Mallow&#x02019;s CP criterion, as well as the performance evaluation criteria. A simulation study and its results are given in Section 3. Finally, concluding remarks are presented in Section 4.</p>
</sec><sec id="sec2">
<title>Literature Review</title><p>A lot of attention has been directed to studies on spline smoothing with autocorrelated errors. [
<xref ref-type="bibr" rid="R2">2</xref>] compared GCV and REML and recommended both as good smoothing parameter selection methods for small and medium-sized samples. [
<xref ref-type="bibr" rid="R3">3</xref>,<xref ref-type="bibr" rid="R4">4</xref>] applied the smoothing spline method to fit a curve to a noisy data set, where the selection of the smoothing parameter is essential. An improved <italic>C</italic><sub><italic>p</italic></sub> criterion for spline smoothing based on Stein&#x02019;s unbiased risk estimate was proposed to select the smoothing parameter. The resulting fitted curve is superior to and more stable than those obtained with commonly used selection criteria, and it possesses the same asymptotic optimality as <italic>C</italic><sub><italic>p</italic></sub>. [
<xref ref-type="bibr" rid="R5">5</xref>] applied most of the data-driven smoothing parameter selection methods and compared them for large and small sample sizes. A combination of Akaike&#x02019;s information criterion and Generalized Cross-Validation was recommended as the best selection criterion: for large samples, the <italic>GF</italic><sub><italic>AIC</italic></sub> method appears more appropriate, while for small samples they proposed using the GCV criterion. [
<xref ref-type="bibr" rid="R6">6</xref>] investigated two types of results that support the use of GCV for variable selection under the assumption of sparsity. The first type of result is based on the well-established links between GCV on the one hand and Mallows&#x02019;s Cp and the Stein Unbiased Risk Estimator (SURE) on the other. It states that GCV performs as well as Cp or SURE in a regularized or penalized least squares problem as an estimator of the prediction error for penalties in the neighborhood of the optimal value. [
<xref ref-type="bibr" rid="R7">7</xref>,<xref ref-type="bibr" rid="R8">8</xref>,<xref ref-type="bibr" rid="R9">9</xref>] investigated the behavior of the optimal values of gamma and rho to identify simple practical rules for choosing them. RGCV and modified GCV perform significantly better than GCV. Performance is defined in terms of the Sobolev error, which is shown by example to be more consistent with a visual assessment of the fit than the average squared error. [
<xref ref-type="bibr" rid="R10">10</xref>,<xref ref-type="bibr" rid="R11">11</xref>,<xref ref-type="bibr" rid="R12">12</xref>] discussed UBR and GCV for selecting the optimal knots in spline regression. The criteria for selecting the best model were based on the Mean Squared Error and R-squared. The simulation was performed on a truncated spline function with errors generated from a Normal distribution for varied sample sizes and error variances. The results of the simulation study showed that GCV estimates the knots more accurately than UBR. [
<xref ref-type="bibr" rid="R13">13</xref>] considered nonparametric regression problems and developed a model-averaging procedure for smoothing spline regression. Model weights were estimated using a delete-one-out cross-validation procedure to minimize the prediction error. A simulation study was performed using a program written in R, comparing the well-known CV, Generalized Cross-Validation (GCV), and the proposed method. The model-averaging approach is straightforward to implement and gives reliable performance in simulations.</p>
<p>It is clear from the existing literature that the goodness of fit of smoothing splines for time series observations has not been investigated so far. This paper presents a goodness-of-fit test for time series observations using three classical cubic spline nonparametric regression functions. </p>
</sec><sec id="sec3">
<title>Methodology</title><p>This section discusses the methodology applied in this study. </p>
<title>3.1. Cubic Smoothing Spline Regression Model</title><p>The most common example of the smoothing spline is the cubic spline; it is the smoothing spline's functional form, a piecewise cubic function that interpolates the dataset while ensuring the smoothness of the fit. It consists of piecewise third-order polynomials that pass through a set of points and has continuous first and second derivatives, with order of continuity (d&#x02013;1), where d is the polynomial degree. A model with a truncated power basis function b(x) transforms the variables X<sub>i</sub> by applying the basis function and fits a model using the transformed variables, which adds non-linearity to the model and enables the spline to fit smooth, flexible non-linear functions. The spline smoothing model is written as;</p>

<disp-formula id="FD1"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>=</mo><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mo>+</mo><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math></div><div class="l"><label>(1)</label></div></div></disp-formula><p>Where; <math><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> is the response variable, <math><semantics><mrow><mi>f</mi></mrow></semantics></math> is an unknown smoothing function, <math><semantics><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> is the independent/predictor variable, and <math><semantics><mrow><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> is a zero-mean autocorrelated stationary process. </p>
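<p>The autocorrelated error process <math><semantics><mrow><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> in model (1), as used in the Monte Carlo design described in the abstract with &#x003c1; = 0.2, 0.5, and 0.8, can be simulated as a stationary AR(1) series. The sketch below is our own illustration of that setup, not the authors' code; the function name is hypothetical.</p>

```python
import numpy as np

def ar1_errors(n, rho, sigma=1.0, rng=None):
    # Stationary AR(1) noise: e_t = rho * e_(t-1) + u_t, with u_t ~ N(0, sigma^2).
    rng = np.random.default_rng() if rng is None else rng
    e = np.empty(n)
    # Draw e_0 from the stationary distribution so the series starts in equilibrium.
    e[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho ** 2))
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0.0, sigma)
    return e

rng = np.random.default_rng(42)
r1 = {}
for rho in (0.2, 0.5, 0.8):
    e = ar1_errors(5000, rho, rng=rng)
    # Lag-1 sample autocorrelation; it should sit close to the chosen rho.
    r1[rho] = np.corrcoef(e[:-1], e[1:])[0, 1]
```

<p>Responses y<sub>i</sub> can then be generated as a smooth signal f(t<sub>i</sub>) plus these errors, mirroring model (1).</p>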
<p>The general cubic spline function is given as;</p>

<disp-formula id="FD2"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced><mo>=</mo><mi mathvariant="normal"> </mi><mi>a</mi><msup><mrow><mi>t</mi></mrow><mrow><mn>3</mn></mrow></msup><mo>+</mo><mi>b</mi><msup><mrow><mi>t</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>+</mo><mi>c</mi><mi>t</mi><mo>+</mo><mi>d</mi><mo>+</mo><mi>ε</mi></mrow></semantics></math></div><div class="l"><label>(2)</label></div></div></disp-formula><p>Where; a, b, c, and d = real number coefficients with <math><semantics><mrow><mi>a</mi><mo>≠</mo><mn>0</mn></mrow></semantics></math>, t = independent variable, <math><semantics><mrow><mi>ε</mi></mrow></semantics></math> = error term, and d.f. = k-d-1 (k = number of knots and d = degree of the cubic spline)</p>
<p>The cubic spline smoothing estimate function is <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></semantics></math> while; <math><semantics><mrow><mi>f</mi></mrow></semantics></math> refers to the minimizer of a twice differentiable function of; </p>

<disp-formula id="FD3"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>S</mi><mfenced separators="|"><mrow><mi>f</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><msup><mrow><mfenced separators="|"><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup><mi mathvariant="normal"> </mi><mo>+</mo><mi mathvariant="normal"> </mi><mi>λ</mi><mrow><msubsup><mo stretchy="false">∫</mo><mrow><mi>a</mi></mrow><mrow><mi>b</mi></mrow></msubsup><mrow><msup><mrow><mfenced separators="|"><mrow><msup><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi mathvariant="normal">'</mi><mi mathvariant="normal">'</mi></mrow></msup><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup><mi>d</mi><mi>t</mi></mrow></mrow></mrow></mrow></mrow></semantics></math></div><div class="l"><label>(3)</label></div></div></disp-formula><p>Where;</p>
<p><math><semantics><mrow><mi>λ</mi></mrow></semantics></math> is a smoothing parameter,</p>
<p>The first term in equation (3) is the residual sum of squares, which measures the fidelity of the fit to the data.</p>
<p>The roughness penalty in the second term of equation (3) is large when the integrated squared second derivative of the regression function <math><semantics><mrow><msup><mrow><mi>f</mi></mrow><mrow><mi>'</mi><mi>'</mi></mrow></msup><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></semantics></math> is also large. </p>
<p>As &#x003bb; approaches 0, <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></semantics></math> simply interpolates the data set.</p>
<p>If &#x003bb; is very large, then <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></semantics></math> is chosen so that <math><semantics><mrow><msup><mrow><mi>f</mi></mrow><mrow><mi>'</mi><mi>'</mi></mrow></msup><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></semantics></math> is everywhere 0, which implies an overall linear least-squares fit to the observations.</p>
<p>If <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></semantics></math> values are fixed at <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></mfenced><mo>,</mo><mi> </mi><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi> </mi><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub></mrow></mfenced></mrow></semantics></math>, the roughness <math><semantics><mrow><mrow><msubsup><mo stretchy="false">∫</mo><mrow><mi>a</mi></mrow><mrow><mi>b</mi></mrow></msubsup><mrow><msup><mrow><mfenced separators="|"><mrow><msup><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>'</mi><mi>'</mi></mrow></msup><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup><mi>d</mi><mi>t</mi></mrow></mrow></mrow></semantics></math> is minimized by a natural cubic spline; this solution can be expressed in terms of basis functions.</p>
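<p>The role of &#x003bb; in equation (3) can be illustrated numerically with a discrete analogue: replacing the integrated squared second derivative by a second-difference penalty gives a Whittaker-type smoother with the same limiting behavior (small &#x003bb; interpolates the data; very large &#x003bb; approaches a linear fit, since the second differences of a straight line vanish). This is an illustrative sketch under our own assumptions, not the estimator used in the paper.</p>

```python
import numpy as np

def whittaker_smooth(y, lam):
    # Minimize ||y - f||^2 + lam * ||D2 f||^2, a discrete analogue of equation (3);
    # D2 is the second-difference operator playing the role of f''.
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 60)
y = np.sin(2.0 * np.pi * t) + 0.2 * rng.standard_normal(60)

f_rough = whittaker_smooth(y, 1e-8)   # lam near 0: nearly interpolates the noise
f_linear = whittaker_smooth(y, 1e8)   # huge lam: approaches a straight-line fit

rss_rough = np.sum((y - f_rough) ** 2)
rss_linear = np.sum((y - f_linear) ** 2)
```

<p>As expected, the rougher fit attains a much smaller residual sum of squares, while the heavily penalized fit behaves like a least-squares line.</p>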
<title>3.2. Generalized Cross-Validation (GCV) Estimation Method with an Autocorrelation Structure</title><p>The term Generalized Cross-Validation (GCV) was proposed by [
<xref ref-type="bibr" rid="R14">14</xref>] and [
<xref ref-type="bibr" rid="R17">17</xref>] as a replacement for Cross-Validation (CV); it is the most popular method for choosing the complexity of statistical models. The basic principle of cross-validation is to leave the data points out one at a time and to choose the value of &#x003bb; under which the missing data points are best predicted by the remainder of the data. To be precise, let <math><semantics><mrow><msubsup><mrow><mi>g</mi></mrow><mrow><mi>λ</mi></mrow><mrow><mo>(</mo><mo>-</mo><mi>i</mi><mo>)</mo></mrow></msubsup></mrow></semantics></math> be the smoothing spline computed from all the data except <math><semantics><mrow><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>,</mo><mi> </mi><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>)</mo><mo>,</mo></mrow></semantics></math> using the value &#x003bb; for the smoothing parameter. The cross-validation choice of &#x003bb; is then the value of &#x003bb; that minimizes;</p>

<disp-formula id="FD4"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>C</mi><mi>V</mi><mo>(</mo><mi>λ</mi><mo>)</mo><mi mathvariant="normal"> </mi><mo>=</mo><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><msup><mrow><mfenced open="{" close="}" separators="|"><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>-</mo><msubsup><mrow><mover accent="true"><mrow><mi>g</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>λ</mi></mrow><mrow><mo>(</mo><mo>-</mo><mi>i</mi><mo>)</mo></mrow></msubsup><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>)</mo></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mrow></mrow></semantics></math></div><div class="l"><label>(4)</label></div></div></disp-formula><p>Equation (4) is analogous to the criterion used for regression model estimation [
<xref ref-type="bibr" rid="R16">16</xref>]. Define a matrix A(&#x003bb;) by;</p>

<disp-formula id="FD5"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>A</mi></mrow><mrow><mi>i</mi><mi>j</mi></mrow></msub><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>g</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>,</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>j</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(5)</label></div></div></disp-formula>
<disp-formula id="FD6"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>C</mi><mi>V</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><mfrac><mrow><msup><mrow><mfenced open="{" close="}" separators="|"><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>-</mo><mover accent="true"><mrow><mi>g</mi></mrow><mo>^</mo></mover><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>)</mo></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><msup><mrow><mfenced open="{" close="}" separators="|"><mrow><mn>1</mn><mo>-</mo><msub><mrow><mi>A</mi></mrow><mrow><mi>i</mi><mi>i</mi></mrow></msub><mo>(</mo><mi>λ</mi><mo>)</mo></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></mrow></mrow></semantics></math></div><div class="l"><label>(6)</label></div></div></disp-formula><p>Wang, Meyer &#x26; Opsomer (2013) [
<xref ref-type="bibr" rid="R15">15</xref>] also proposed the application of a related criterion, referred to as Generalized Cross-Validation, obtained from equation (6) by replacing <math><semantics><mrow><msub><mrow><mi>A</mi></mrow><mrow><mi>i</mi><mi>i</mi></mrow></msub><mo>(</mo><mi>λ</mi><mo>)</mo></mrow></semantics></math> with its mean value <math><semantics><mrow><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mi>A</mi><mo>(</mo><mi>λ</mi><mo>)</mo></mrow></semantics></math>; this gives the score;</p>

<disp-formula id="FD7"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>C</mi><mi>V</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mfrac><mrow><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>R</mi><mi>S</mi><mi>S</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced></mrow><mrow><msup><mrow><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mi>A</mi><mo>(</mo><mi>λ</mi><mo>)</mo></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(7)</label></div></div></disp-formula><p>Where; RSS(&#x003bb;) refers to the residual sum of squares. [
<xref ref-type="bibr" rid="R17">17</xref>] also gave theoretical arguments showing that GCV should pick a value of &#x003bb; that is optimal in the sense of minimizing the mean squared error (MSE) at the design points. Published practical examples bear out its good performance [
<xref ref-type="bibr" rid="R18">18</xref>]. The Generalized Cross-Validation technique is well known for its optimality properties [
<xref ref-type="bibr" rid="R19">19</xref>]. For any given &#x003bb;, the <math><semantics><mrow><mi>n</mi><mi> </mi><mo>×</mo><mi> </mi><mi>n</mi></mrow></semantics></math> influence matrix is given by;</p>

<disp-formula id="FD8"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mfenced open="[" close="]" separators="|"><mrow><mtable><mtr><mtd><mrow><maligngroup /><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>n</mi></mrow></msub><mo>,</mo><mi> </mi><mi>λ</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></mfenced></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>n</mi></mrow></msub><mo>,</mo><mi> </mi><mi>λ</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></mfenced></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><mo>.</mo></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><mo>.</mo></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><mo>.</mo></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>n</mi></mrow></msub><mo>,</mo><mi> </mi><mi>λ</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub></mrow></mfenced></mrow></mtd></mtr><mtr><mtd><mrow><maligngroup /><mi> </mi></mrow></mtd></mtr></mtable></mrow></mfenced><mo>=</mo><mi>S</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow></semantics></math></div><div class="l"><label>(8)</label></div></div></disp-formula><p>therefore W<sub>0</sub>(&#x003bb;) can be rewritten as;</p>

<disp-formula id="FD9"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>W</mi></mrow><mrow><mn>0</mn></mrow></msub><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><msup><mrow><mfenced separators="|"><mrow><msub><mrow><mi>a</mi></mrow><mrow><mi>k</mi><mi>j</mi></mrow></msub><msub><mrow><mi>y</mi></mrow><mrow><mi>j</mi></mrow></msub><mo>-</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>y</mi></mrow><mrow><mi>k</mi></mrow></msub></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mrow></mrow><mrow><msup><mrow><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><msub><mrow><mi>a</mi></mrow><mrow><mi>k</mi><mi>k</mi></mrow></msub></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(9)</label></div></div></disp-formula><p>Where; Generalized Cross-Validation is a modified form of Cross-Validation, a standard method for estimating the smoothing parameter. The GCV score is constructed by analogy with the CV score, which is obtained from the ordinary residuals by dividing them by <math><semantics><mrow><msub><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><mo>(</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>λ</mi></mrow></msub><mo>)</mo></mrow><mrow><mi>i</mi><mi>i</mi></mrow></msub></mrow></semantics></math>. 
The accepted construction of GCV is to replace the factor <math><semantics><mrow><msub><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><mo>(</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>λ</mi></mrow></msub><mo>)</mo></mrow><mrow><mi>i</mi><mi>i</mi></mrow></msub></mrow></semantics></math> in Cross-Validation with the mean value <math><semantics><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup></mrow></semantics></math> trace<math><semantics><mrow><mo>(</mo><msub><mrow><mi>S</mi></mrow><mrow><mi>λ</mi></mrow></msub><mo>)</mo></mrow></semantics></math>. Consequently, summing the squared residuals and dividing by the factor {1 &#x02212; <italic>n</italic><sup>&#x02212;1</sup> <italic>trace </italic>(<italic>S</italic><sub><italic>&#x003bb;</italic></sub>)}<sup>2</sup>, by analogy with conventional cross-validation, the GCV smoothing criterion is written mathematically as;</p>

<disp-formula id="FD10"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>C</mi><mi>V</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mi mathvariant="normal"> </mi><mfrac><mrow><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>k</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><msup><mrow><mfenced open="{" close="}" separators="|"><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>k</mi></mrow></msub><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>k</mi></mrow></msub></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mrow></mrow><mrow><msup><mrow><mfenced open="{" close="}" separators="|"><mrow><mn>1</mn><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(10)</label></div></div></disp-formula>
<disp-formula id="FD11"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>C</mi><mi>V</mi><mo>(</mo><mi>λ</mi><mo>)</mo><mi mathvariant="normal"> </mi><mo>=</mo><mfrac><mrow><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><msup><mrow><mfenced open="‖" close="‖" separators="|"><mrow><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> </mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac><mi mathvariant="normal"> </mi></mrow></semantics></math></div><div class="l"><label>(11)</label></div></div></disp-formula><p>Where; n is the number of observations, &#x003bb; is the smoothing parameter, and (S<sub>&#x003bb;</sub>)<sub>ii</sub> refers to the ith diagonal element of the smoother matrix S<sub>&#x003bb;</sub>.</p>
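<p>Equation (11) can be evaluated directly for any linear smoother. The sketch below is our own illustration, not the paper's implementation: a ridge-type polynomial smoother stands in for the spline smoother matrix S<sub>&#x003bb;</sub>, and the smoothing parameter is chosen by minimizing the GCV score over a small grid.</p>

```python
import numpy as np

def gcv_score(S, y):
    # Equation (11): GCV(lam) = n^-1 * ||(I - S) y||^2 / [n^-1 * trace(I - S)]^2
    n = len(y)
    resid = y - S @ y
    return (resid @ resid / n) / (np.trace(np.eye(n) - S) / n) ** 2

def hat_matrix(X, lam):
    # Ridge-type linear smoother standing in for the spline smoother matrix.
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)
X = np.vander(t, 8, increasing=True)   # polynomial basis, illustration only
y = np.sin(2.0 * np.pi * t) + 0.3 * rng.standard_normal(40)

grid = (1e-8, 1e-5, 1e-2, 10.0)
scores = {lam: gcv_score(hat_matrix(X, lam), y) for lam in grid}
best_lam = min(scores, key=scores.get)
```

<p>The same score can be related back to equation (6): dividing the ordinary residuals by 1 &#x02212; A<sub>ii</sub>(&#x003bb;) and averaging gives the CV score that GCV approximates.</p>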
<p>The first research on cross-validation was conducted by [
<xref ref-type="bibr" rid="R20">20</xref>], and it was subsequently extended to the smoothing of the log periodogram [
<xref ref-type="bibr" rid="R21">21</xref>]. The term Generalized Cross-Validation (GCV) was coined by [
<xref ref-type="bibr" rid="R20">20</xref>]. The GCV score, constructed by analogy with the CV score, is obtained from the ordinary residuals by dividing them by <math><semantics><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><mo>(</mo><mi>S</mi><mi>λ</mi><mo>)</mo><mi>i</mi><mi>i</mi></mrow></semantics></math>. The essential idea of GCV is to replace the factors <math><semantics><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><mo>(</mo><mi>S</mi><mi>λ</mi><mo>)</mo><mi>i</mi><mi>i</mi></mrow></semantics></math><italic> </italic>with their mean value <math><semantics><mrow><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mo>(</mo><mi>S</mi><mi>λ</mi><mo>)</mo></mrow></semantics></math> and then sum the squared residuals adjusted by the factor <math><semantics><mrow><mo>{</mo><mn>1</mn><mi> </mi><mo>-</mo><mi> </mi><msup><mrow><mi>n</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mi>t</mi><mi>r</mi><mo>(</mo><mi>S</mi><mi>λ</mi><mo>)</mo><mo>}</mo></mrow></semantics></math>. Consider spline smoothing for the non-parametric estimation of a regression function in a time-series setting, and assume that the response variables <math><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> are observed at times <math><semantics><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math>, for <math><semantics><mrow><mi>i</mi><mi> </mi><mo>=</mo><mi> </mi><mn>1</mn><mo>,</mo><mi> </mi><mo>.</mo><mo>.</mo><mo>.</mo><mo>,</mo><mi> </mi><mi>n</mi></mrow></semantics></math>, and that the <math><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> are generated by a model of the form</p>
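The GCV criterion of equations (10) and (11) can be illustrated numerically. The sketch below is not from the paper: it approximates the cubic-spline smoother matrix S&#x003bb; with a discretized second-difference roughness penalty, and all function and variable names are ours.

```python
import numpy as np

def second_diff(n):
    # Second-difference penalty matrix D: a common finite-difference
    # stand-in for the cubic-spline roughness penalty.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return D

def smoother_matrix(n, lam):
    # S_lambda for a discretized smoothing spline: f_hat = S_lambda @ y
    D = second_diff(n)
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def gcv(y, lam):
    # Equation (11): GCV(lam) = n^-1 ||(I - S_lam) y||^2 / [n^-1 tr(I - S_lam)]^2
    n = len(y)
    S = smoother_matrix(n, lam)
    resid = (np.eye(n) - S) @ y
    return (resid @ resid / n) / (np.trace(np.eye(n) - S) / n) ** 2

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(50)
lams = [0.01, 0.1, 1.0, 10.0]
scores = {lam: gcv(y, lam) for lam in lams}
best = min(scores, key=scores.get)
```

Minimizing the score over a grid of &#x003bb; values picks the smoothing level the criterion prefers.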

<disp-formula id="FD12"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mi mathvariant="normal"> </mi><mo>+</mo><mi mathvariant="normal"> </mi><mi>Z</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(12)</label></div></div></disp-formula><p>Where <math><semantics><mrow><mi>f</mi><mo>(</mo><mo>.</mo><mo>)</mo></mrow></semantics></math><bold> </bold>refers to the smooth regression function and <math><semantics><mrow><mi>Z</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> is a zero-mean, autocorrelated stationary process. The design points <italic>t</italic><sub><italic>i</italic></sub><italic> </italic>are fixed but need not be equally spaced, with <italic>t</italic><sub><italic>1</italic></sub><italic> &lt; . . . &lt; </italic><italic>t</italic><sub><italic>n</italic></sub>.</p>
<p>If the <math><semantics><mrow><mi>Z</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math><italic> </italic>in (12) has a known correlation function, with <math><semantics><mrow><mi>C</mi><mi>o</mi><mi>v</mi><mi>Z</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi> </mi><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub></mrow></mfenced><mo>=</mo><msup><mrow><mi>σ</mi></mrow><mrow><mn>2</mn></mrow></msup><msub><mrow><mi>v</mi></mrow><mrow><mi>i</mi><mi>j</mi></mrow></msub></mrow></semantics></math>, a natural extension of the usual smoothing spline approach is to estimate <italic>f</italic> by the <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></semantics></math> which minimizes;</p>

<disp-formula id="FD13"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msup><mrow><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>f</mi></mrow></mfenced></mrow><mrow><mi>T</mi></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>f</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>+</mo><mi mathvariant="normal"> </mi><mi>λ</mi><mrow><msubsup><mo stretchy="false">∫</mo><mrow><mi>a</mi></mrow><mrow><mi>b</mi></mrow></msubsup><mrow><msup><mrow><mo>{</mo><msup><mrow><mi>f</mi></mrow><mrow><mi mathvariant="normal">'</mi><mi mathvariant="normal">'</mi></mrow></msup><mo>(</mo><mi>t</mi><mo>)</mo><mo>}</mo></mrow><mrow><mn>2</mn></mrow></msup><mi>d</mi><mi>t</mi></mrow></mrow></mrow></semantics></math></div><div class="l"><label>(13)</label></div></div></disp-formula><p>over all suitably smooth functions f, where <math><semantics><mrow><mi>W</mi><mi> </mi><mo>=</mo><mi> </mi><msup><mrow><mi>V</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mo>,</mo><mi> </mi><mi>V</mi><mi> </mi><mo>=</mo><mi> </mi><mo>[</mo><msub><mrow><mi>v</mi></mrow><mrow><mi>i</mi><mi>j</mi></mrow></msub><mo>]</mo><mo>,</mo><mi> </mi><mi>y</mi><mi> </mi><mo>=</mo><mi> </mi><mo>(</mo><msub><mrow><mi>y</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><msub><mrow><mi>y</mi></mrow><mrow><mi>n</mi></mrow></msub><msup><mrow><mo>)</mo></mrow><mrow><mi>T</mi></mrow></msup></mrow></semantics></math> and <math><semantics><mrow><mi>f</mi><mi> </mi><mo>=</mo><mi> </mi><mo>(</mo><mi>f</mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>)</mo><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi>f</mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>)</mo><msup><mrow><mo>)</mo></mrow><mrow><mi>T</mi></mrow></msup></mrow></semantics></math>. It has been proven that the minimizer <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></semantics></math> is a natural cubic spline with knots at the<bold> </bold>t<sub>j</sub>. Moreover, if <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></semantics></math> denotes the vector with ith element <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>)</mo></mrow></semantics></math>, then there is a matrix S<sub>&#x003bb;</sub> such that <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mi> </mi><mo>=</mo><mi> </mi><msub><mrow><mi>S</mi></mrow><mrow><mi>λ</mi></mrow></msub><mi>y</mi></mrow></semantics></math>, i.e. for fixed &#x003bb; the estimate is a linear function of y. This linearity suggests a close connection between spline smoothing and kernel smoothing, as shown explicitly in [
<xref ref-type="bibr" rid="R22">22</xref>]. One approach to choosing the parameter &#x003bb; is to minimize the generalized cross-validation criterion [
<xref ref-type="bibr" rid="R17">17</xref>]. In the current setting, the natural extension of this model is to minimize equation (13); this gives a technique for estimating f in the presence of a known autocorrelation structure. Concerning interval estimation of f, the Bayesian formulation presented by [
<xref ref-type="bibr" rid="R22">22</xref>] extends to the correlated case: V is replaced by Silverman's inverse diagonal weighting matrix, which gives the posterior variance matrix, written as; </p>

<disp-formula id="FD14"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>V</mi><mi>a</mi><mi>r</mi><mfenced separators="|"><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><msup><mrow><mi>σ</mi></mrow><mrow><mn>2</mn></mrow></msup><mi>A</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi>V</mi></mrow></semantics></math></div><div class="l"><label>(14)</label></div></div></disp-formula><p>The minimization of GCV(&#x003bb;), as proposed by [
<xref ref-type="bibr" rid="R23">23</xref>] and [
<xref ref-type="bibr" rid="R24">24</xref>], is written as;</p>

<disp-formula id="FD15"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>C</mi><mi>V</mi><mo>(</mo><mi>λ</mi><mo>)</mo><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><msup><mrow><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><mi>T</mi></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(15)</label></div></div></disp-formula><p>Where; (<math><semantics><mrow><mi>S</mi><mi>λ</mi></mrow></semantics></math>)<sub>ii</sub> is the <math><semantics><mrow><mi>i</mi><mi>t</mi><mi>h</mi></mrow></semantics></math> diagonal element of the smoother matrix, W is the inverse of the correlation matrix, <math><semantics><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mo>(</mo><msub><mrow><mi>y</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>,</mo><msub><mrow><mi>y</mi></mrow><mrow><mi>n</mi></mrow></msub><msup><mrow><mo>)</mo></mrow><mrow><mi>T</mi></mrow></msup></mrow></semantics></math> and <math><semantics><mrow><mi>f</mi><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mo>(</mo><mi>f</mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>)</mo><mo>,</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>,</mo><mi>f</mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>)</mo><msup><mrow><mo>)</mo></mrow><mrow><mi>T</mi></mrow></msup></mrow></semantics></math>.</p>
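Equation (15) can be evaluated with the same discretized machinery. This sketch is our own construction, with an assumed AR(1) correlation structure and function names that are not from the paper:

```python
import numpy as np

def gcv_autocorr(y, lam, rho):
    # Equation (15): GCV(lam) = (y - f_hat)^T W (y - f_hat) / [trace(I - S_lam)]^2,
    # with W = V^{-1} and a second-difference stand-in for the spline penalty.
    n = len(y)
    idx = np.arange(n)
    W = np.linalg.inv(rho ** np.abs(idx[:, None] - idx[None, :]))
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    S = np.linalg.solve(W + lam * D.T @ D, W)   # smoother matrix S_lam
    r = y - S @ y                               # residual y - f_hat
    return (r @ W @ r) / np.trace(np.eye(n) - S) ** 2

rng = np.random.default_rng(2)
y = np.sin(np.linspace(0, 6, 30)) + 0.3 * rng.standard_normal(30)
scores = {lam: gcv_autocorr(y, lam, rho=0.5) for lam in (0.1, 1.0, 10.0)}
```

The only change from ordinary GCV is that the residual sum of squares is weighted by W.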
<title>3.3. Generalized Maximum Likelihood (GML) Estimation Method with Autocorrelation Structure</title><p>[
<xref ref-type="bibr" rid="R25">25</xref>] proposed the GML technique for correlated data, which has a single smoothing parameter. In the bivariate case, however, there are two smoothing parameters that must be estimated together with the covariance parameters. Following a similar derivation, the GML criterion is given as;</p>

<disp-formula id="FD16"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>M</mi><mi>L</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><msup><mrow><mi>y</mi></mrow><mrow><mi mathvariant="normal">ᴵ</mi></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><msup><mrow><mi>d</mi><mi>e</mi><mi>t</mi></mrow><mrow><mo>+</mo></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> </mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi><mo>-</mo><mi>m</mi></mrow></mfrac></mrow></msup></mrow></mfrac><mi mathvariant="normal"> </mi></mrow></semantics></math></div><div class="l"><label>(16)</label></div></div></disp-formula><p>Where; det<sup>+</sup> <math><semantics><mrow><mfenced separators="|"><mrow><mi>I</mi><mi> </mi><mo>-</mo><mi> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></semantics></math> refers to the product of the <math><semantics><mrow><mo>(</mo><mi>n</mi><mo>-</mo><mi> </mi><mi>m</mi><mo>)</mo></mrow></semantics></math><italic> </italic>non-zero eigenvalues of<math><semantics><mrow><mi> </mi><mfenced separators="|"><mrow><mi>I</mi><mi> </mi><mo>-</mo><mi> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></semantics></math>. [
<xref ref-type="bibr" rid="R25">25</xref>] provided a Bayesian model for the GML method's general framework, from which posterior confidence intervals for a spline estimate can be calculated. Suppose that the data are generated by the model;</p>

<disp-formula id="FD17"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi mathvariant="normal"> </mi><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>=</mo><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mo>+</mo><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>,</mo><mi mathvariant="normal"> </mi><mi mathvariant="normal"> </mi><mi>i</mi><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mn>1,2</mn><mo>,</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>.</mo><mi mathvariant="normal"> </mi><mo>,</mo><mi mathvariant="normal"> </mi><mi>n</mi><mo>,</mo><mi mathvariant="normal"> </mi><mi mathvariant="normal"> </mi><mi mathvariant="normal"> </mi><mi mathvariant="normal"> </mi><mi mathvariant="normal"> </mi><msub><mrow><mi>t</mi></mrow><mrow><mi>i</mi></mrow></msub><mi>ϵ</mi><mfenced open="[" close="]" separators="|"><mrow><mn>0,1</mn></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(17)</label></div></div></disp-formula><p>Where; <math><semantics><mrow><mo>∈</mo><mo>=</mo><mfenced separators="|"><mrow><msub><mrow><mo>∈</mo></mrow><mrow><mn>1</mn></mrow></msub><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi> </mi><msub><mrow><mo>∈</mo></mrow><mrow><mi>n</mi></mrow></msub></mrow></mfenced></mrow></semantics></math>~<math><semantics><mrow><mi>N</mi><mo>(</mo><mn>0</mn><mo>,</mo><msup><mrow><mi>σ</mi></mrow><mrow><mn>2</mn></mrow></msup><msup><mrow><mi>W</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup><mo>)</mo></mrow></semantics></math>, independent of <italic>f</italic>. Model (17) is usually referred to as a Bayesian model; it is also known as a hierarchical model or a mixed-effects model. 
This Bayesian model is similar to the model illustrated by [
<xref ref-type="bibr" rid="R19">19</xref>], though the residuals are correlated. Based on the justification of [
<xref ref-type="bibr" rid="R19">19</xref>], it can be shown that;</p>

<disp-formula id="FD18"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><munder><mrow><mi mathvariant="normal">lim</mi></mrow><mrow><mi>n</mi><mo>→</mo><mi mathvariant="normal">∞</mi></mrow></munder><mi>E</mi><mo>(</mo><mi>f</mi><mo>(</mo><mi>t</mi><mo>)</mo><mo>/</mo><mi>y</mi><mo>)</mo><mo>=</mo><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mo>(</mo><mi>t</mi><mo>)</mo><mi mathvariant="normal"> </mi><mi>a</mi><mi>n</mi><mi>d</mi><mi mathvariant="normal"> </mi><munder><mrow><mi mathvariant="normal">lim</mi></mrow><mrow><mi>n</mi><mo>→</mo><mi mathvariant="normal">∞</mi></mrow></munder><mi>c</mi><mi>o</mi><mi>v</mi><mo>(</mo><mi>f</mi><mo>/</mo><mi>y</mi><mo>)</mo><mo>=</mo><msup><mrow><mi>σ</mi></mrow><mrow><mn>2</mn></mrow></msup><msup><mrow><mi>W</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup></mrow></semantics></math></div><div class="l"><label>(18)</label></div></div></disp-formula><p>Where; <math><semantics><mrow><mi>f</mi><mi mathvariant="bold-italic"> </mi><mo>=</mo><mi> </mi><mo>(</mo><mi>f</mi><mi> </mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub><mo>)</mo><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi> </mi><mi>f</mi><mo>(</mo><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub><mo>)</mo><mo>)</mo><mi>’</mi></mrow></semantics></math> and an expanded prior (<math><semantics><mrow><mi>σ</mi><mo>→</mo><mi>∞</mi></mrow></semantics></math>)<italic> </italic>is assumed for polynomial coefficients of degree smaller than m<italic>.</italic> </p>
<p>According to [
<xref ref-type="bibr" rid="R25">25</xref>], the covariance matrix W<sup>-1</sup> depends on a vector of correlation parameters <math><semantics><mrow><mi>τ</mi></mrow></semantics></math><italic>. </italic>Typical covariance structures include first-order autoregressive for time-series observations, compound symmetry or unstructured for repeated measurements, and spatial structures for spatial data. The GML criterion with an autocorrelation structure is therefore given by;</p>

<disp-formula id="FD19"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>G</mi><mi>M</mi><mi>L</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><msup><mrow><mi>y</mi></mrow><mrow><mi mathvariant="normal">ᴵ</mi></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><msup><mrow><mi>d</mi><mi>e</mi><mi>t</mi></mrow><mrow><mo>+</mo></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi><mo>-</mo><mi>m</mi></mrow></mfrac></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(19)</label></div></div></disp-formula><p>Where; <math><semantics><mrow><msup><mrow><mi>d</mi><mi>e</mi><mi>t</mi></mrow><mrow><mo>+</mo></mrow></msup><mo>(</mo><mi>I</mi><mi> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mo>)</mo></mrow></semantics></math> is the product of the <math><semantics><mrow><mi>n</mi><mi> </mi><mo>–</mo><mi> </mi><mi>m</mi></mrow></semantics></math> nonzero eigenvalues of (I &#x02013; S&#x003bb;), &#x003bb; is the smoothing parameter, <math><semantics><mrow><mi>W</mi></mrow></semantics></math> is the correlation structure, <math><semantics><mrow><mi>S</mi><mi>λ</mi></mrow></semantics></math> is the smoother matrix, <math><semantics><mrow><mi>n</mi><mo>=</mo><msub><mrow><mi>n</mi></mrow><mrow><mn>1</mn></mrow></msub><mi> </mi><mo>+</mo><mi> </mi><msub><mrow><mi>n</mi></mrow><mrow><mn>2</mn></mrow></msub></mrow></semantics></math> is the total number of observations and 
<math><semantics><mrow><mi>m</mi></mrow></semantics></math> is the number of zero eigenvalues.</p>
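A rough numerical sketch of criterion (19), with det+ computed from the (n - m) nonzero eigenvalues. The AR(1) correlation, the second-difference penalty, and all names are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gml_autocorr(y, lam, rho, m=2):
    # Equation (19): GML(lam) = y' W (I - S_lam) y / [det+( W (I - S_lam) )]^(1/(n-m)),
    # where det+ is the product of the (n - m) nonzero eigenvalues.
    n = len(y)
    idx = np.arange(n)
    W = np.linalg.inv(rho ** np.abs(idx[:, None] - idx[None, :]))  # W = V^{-1}, AR(1) V
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]          # discrete roughness penalty
    S = np.linalg.solve(W + lam * D.T @ D, W)      # smoother matrix S_lam
    A = W @ (np.eye(n) - S)
    num = y @ A @ y
    # m eigenvalues are (numerically) zero; keep the n - m largest in magnitude
    eig = np.sort(np.abs(np.linalg.eigvals(A)))[::-1][: n - m]
    geo_mean = np.exp(np.mean(np.log(eig)))        # [det+]^(1/(n-m)), computed in logs
    return num / geo_mean

rng = np.random.default_rng(4)
y = np.sin(np.linspace(0, 6, 30)) + 0.3 * rng.standard_normal(30)
vals = {lam: gml_autocorr(y, lam, rho=0.5) for lam in (0.1, 1.0, 10.0)}
```

Working with the geometric mean of the eigenvalues (logs) avoids under- or overflow in the pseudo-determinant.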
<title>3.4. Unbiased Risk (UBR) Estimation Method with Autocorrelation Structure</title><p>Unbiased Risk is also known as Mallow&#x02019;s Cp criterion; it was developed by [
<xref ref-type="bibr" rid="R26">26</xref>] to evaluate the fit of regression models based on Ordinary Least Squares (OLS). It is used in model-selection situations where a subset of the explanatory variables may predict the outcome, to locate the best model associated with that subset of independent variables. The smaller the value of Cp, the more accurate the model; Cp is written as;</p>

<disp-formula id="FD20"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>U</mi><mi>B</mi><mi>R</mi><mi mathvariant="normal"> </mi><mo>(</mo><mi>λ</mi><mo>)</mo><mi mathvariant="normal"> </mi><mo>=</mo><mfrac><mrow><mi mathvariant="normal"> </mi><msup><mrow><mfenced open="‖" close="‖" separators="|"><mrow><mo>(</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>I</mi><mo>)</mo><mi>y</mi></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><mi>t</mi><mi>r</mi><mfenced separators="|"><mrow><mi>I</mi><mo>-</mo><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(20)</label></div></div></disp-formula><p>[
<xref ref-type="bibr" rid="R25">25</xref>] provides a UBR technique that can be used effectively to choose the smoothing parameter for cubic spline smoothing with non-Gaussian data. It was developed using the Predictive Mean Square Error (PMSE).</p>
<p>The Unbiased Risk with Autocorrelation structure can be written mathematically as;</p>

<disp-formula id="FD21"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>U</mi><mi>B</mi><mi>R</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi mathvariant="normal"> </mi><mfrac><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><msup><mrow><mfenced open="‖" close="‖" separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mfrac><mrow><mi>k</mi></mrow><mrow><mn>2</mn></mrow></mfrac></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mi>k</mi><mo>-</mo><mn>1</mn></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac><mo>;</mo><mi mathvariant="normal"> </mi><mi mathvariant="normal">k</mi><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mn>0</mn><mo>,</mo><mi mathvariant="normal"> </mi><mn>1</mn><mo>,</mo><mi mathvariant="normal"> </mi><mn>2</mn></mrow></semantics></math></div><div class="l"><label>(21)</label></div></div></disp-formula><p>Where; n is the number of observations <math><semantics><mrow><mo>{</mo><mi>x</mi><mi>i</mi><mo>,</mo><mi>y</mi><mi>i</mi><mo>}</mo><mo>,</mo></mrow></semantics></math> W is the autocorrelation structure, &#x003bb; is the smoothing parameter and <math><semantics><mrow><mi>S</mi><mi>λ</mi></mrow></semantics></math> is the smoother matrix, with <math><semantics><mrow><mi>i</mi><mi>t</mi><mi>h</mi></mrow></semantics></math> diagonal element (S&#x003bb;)<sub>ii</sub>.</p>
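Equation (21) can be sketched directly, with the matrix powers W^(k/2) and W^(k-1) obtained from the eigendecomposition of the symmetric positive-definite W. The AR(1) structure and the function names are our assumptions:

```python
import numpy as np

def ubr_autocorr(y, lam, rho, k=1):
    # Equation (21): UBR(lam) = (1/n) ||W^(k/2) (I - S_lam) y||^2
    #                / [(1/n) trace(W^(k-1) (I - S_lam))]^2,  k = 0, 1, 2
    n = len(y)
    idx = np.arange(n)
    W = np.linalg.inv(rho ** np.abs(idx[:, None] - idx[None, :]))
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    S = np.linalg.solve(W + lam * D.T @ D, W)
    vals, vecs = np.linalg.eigh(W)                      # W symmetric positive definite
    Wp = lambda p: vecs @ np.diag(vals ** p) @ vecs.T   # matrix power W^p
    r = Wp(k / 2) @ (np.eye(n) - S) @ y
    denom = np.trace(Wp(k - 1.0) @ (np.eye(n) - S)) / n
    return (r @ r / n) / denom ** 2

rng = np.random.default_rng(5)
y = np.sin(np.linspace(0, 6, 30)) + 0.3 * rng.standard_normal(30)
scores = {k: ubr_autocorr(y, lam=1.0, rho=0.5, k=k) for k in (0, 1, 2)}
```

Setting k = 1 recovers the familiar unweighted trace in the denominator while keeping the W-weighted residual in the numerator.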
<title>3.5. Proposed Smoothing Method (PSM) with Autocorrelation Structure</title><p>A smoothing spline model is usually written as:</p>

<disp-formula id="FD22"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>=</mo><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mi mathvariant="normal"> </mi><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math></div><div class="l"><label>(22)</label></div></div></disp-formula><p>Where; y refers to the response variable, x refers to a predictor variable, f is the regression function and <math><semantics><mrow><msub><mrow><mi>ε</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></semantics></math> is the random error term.</p>
<p>There are several options to examine whenever model (22) is used in the presence of non-linearity; they include transformation of the observations and additive terms such as cubic splines and spline smoothing. This research work focuses on spline smoothing, which captures non-linearity by introducing bends in the regression curve; these bends are created by hinge functions, and the positions of the bends in the fit are called knots.</p>
<p>The traditional regression analysis's primary purpose is to minimize the residual sum of squares (RSS); the model with the minimum RSS is the preferred model. It is important to note that [
<xref ref-type="bibr" rid="R27">27</xref>] proposed Cross-Validation (CV) as a technique for estimating the smoothing parameter in spline smoothing; in place of the RSS of customary simple regression, a cross-validated residual is defined.</p>
<p>In this manner, an improved spline smoothing technique is proposed by weighting with the parameters <math><semantics><mrow><mi mathvariant="normal">k</mi></mrow></semantics></math> and <math><semantics><mrow><mn>1</mn><mo>-</mo><mi mathvariant="normal">k</mi></mrow></semantics></math> while retaining the other properties and qualities of the UBR and GCV [
<xref ref-type="bibr" rid="R28">28</xref>] and [
<xref ref-type="bibr" rid="R29">29</xref>]. Combining the quantities of the two smoothing methods results in a smoothing method whose model does not overfit time-series observations. The minimizer is the Proposed Smoothing Method (PSM) with autocorrelation structure, given as;</p>
<p>PSM = (k) &#x00d7; [GCV component: overfitting control and optimal knot detection] + <math><semantics><mrow><mo>(</mo><mn>1</mn><mo>-</mo><mi>k</mi><mo>)</mo></mrow></semantics></math> &#x00d7; [UBR component: best for forecasting non-Gaussian data]</p>

<disp-formula id="FD23"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>P</mi><mi>S</mi><mi>M</mi><mi mathvariant="normal"> </mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi>k</mi><mfrac><mrow><msup><mrow><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><mi>T</mi></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac><mi mathvariant="normal"> </mi><mo>+</mo><mo>(</mo><mn>1</mn><mo>-</mo><mi>k</mi><mo>)</mo><mfrac><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><msup><mrow><mfenced open="‖" close="‖" separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mfrac><mrow><mi>g</mi></mrow><mrow><mn>2</mn></mrow></mfrac><mi mathvariant="normal"> </mi></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mo>-</mo><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced open="{" close="}" separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mi>g</mi><mo>-</mo><mn>1</mn></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> 
</mi></mrow></mfenced></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(23)</label></div></div></disp-formula><p>The behavior of the minimized &#x003bb; in the UBR and GCV techniques under the value g = 1, the optimum value for PSM, yields; </p>

<disp-formula id="FD24"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>P</mi><mi>S</mi><mi>M</mi><mi mathvariant="normal"> </mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi>k</mi><mfrac><mrow><msup><mrow><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><mi>T</mi></mrow></msup><mi>W</mi><mfenced separators="|"><mrow><mi>y</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow></mfenced></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> </mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac><mi mathvariant="normal"> </mi><mo>+</mo><mo>(</mo><mn>1</mn><mo>-</mo><mi>k</mi><mo>)</mo><mfrac><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><msup><mrow><mfenced open="‖" close="‖" separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mn>2</mn></mrow></mfrac><mi mathvariant="normal"> </mi></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mo>-</mo><mi>S</mi><mi>λ</mi></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><msup><mrow><mfenced open="[" close="]" separators="|"><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mi>n</mi></mrow></mfrac><mi>t</mi><mi>r</mi><mi>a</mi><mi>c</mi><mi>e</mi><mfenced open="{" close="}" separators="|"><mrow><mi>W</mi><mfenced separators="|"><mrow><mi>I</mi><mi mathvariant="normal"> </mi><mo>-</mo><mi>S</mi><mi>λ</mi><mi mathvariant="normal"> 
</mi></mrow></mfenced></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfrac></mrow></semantics></math></div><div class="l"><label>(24)</label></div></div></disp-formula><p>The proposed method for estimating <italic><bold>f</bold></italic> is given in (23), subject to the condition that <math><semantics><mrow><mn>0</mn><mo>&lt;</mo><mi>g</mi><mo>&lt;</mo><mn>1</mn></mrow></semantics></math> is chosen using the algorithm in Section 3.6. [
<xref ref-type="bibr" rid="R30">30</xref>,<xref ref-type="bibr" rid="R31">31</xref>,<xref ref-type="bibr" rid="R32">32</xref>].</p>
<p>Where; n is the number of observations, k is the weight value, W = V<sup>-1</sup> is the inverse of the correlation matrix of the error term, <italic>y </italic>= (<italic>y</italic><sub>1</sub>, . . . ,<italic>y</italic><sub><italic>n</italic></sub>)<sup><italic>T</italic></sup><italic> </italic>is the vector of observations, <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mi> </mi><mo>=</mo><mi> </mi><msup><mrow><mo>(</mo><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mn>1</mn></mrow></msub></mrow></mfenced><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>t</mi></mrow><mrow><mi>n</mi></mrow></msub></mrow></mfenced><mo>)</mo></mrow><mrow><mi>T</mi></mrow></msup></mrow></semantics></math> = S<sub>&#x003bb;</sub>y is the vector of fitted values, S&#x003bb; is the smoother matrix, and <math><semantics><mrow><mfenced open="‖" close="‖" separators="|"><mrow><msup><mrow><mi>W</mi></mrow><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mn>2</mn></mrow></mfrac><mi> </mi></mrow></msup><mfenced separators="|"><mrow><mi>I</mi><mo>-</mo><mi>S</mi><mi>λ</mi></mrow></mfenced><mi>y</mi></mrow></mfenced></mrow></semantics></math> is the Euclidean norm of the vector <math><semantics><mrow><msup><mrow><mi>W</mi></mrow><mrow><mfrac><mrow><mn>1</mn></mrow><mrow><mn>2</mn></mrow></mfrac></mrow></msup><mo>(</mo><mi>y</mi><mo>-</mo><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mo>)</mo></mrow></semantics></math>.</p>
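Putting the pieces together, the PSM criterion (24) with weight k can be sketched as below. This is our own discretized construction (second-difference penalty, assumed AR(1) correlation), not the authors' implementation:

```python
import numpy as np

def psm_criterion(y, lam, rho, k=0.5):
    # Equation (24): PSM(lam) = k * GCV component (as in eq. 15)
    #              + (1 - k) * UBR component evaluated at g = 1.
    n = len(y)
    idx = np.arange(n)
    W = np.linalg.inv(rho ** np.abs(idx[:, None] - idx[None, :]))  # W = V^{-1}
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    S = np.linalg.solve(W + lam * D.T @ D, W)
    I = np.eye(n)
    r = y - S @ y
    gcv_part = (r @ W @ r) / np.trace(I - S) ** 2
    vals, vecs = np.linalg.eigh(W)
    W_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T     # W^{1/2}
    u = W_half @ (I - S) @ y
    ubr_part = (u @ u / n) / (np.trace(W @ (I - S)) / n) ** 2
    return k * gcv_part + (1 - k) * ubr_part

rng = np.random.default_rng(6)
y = np.sin(np.linspace(0, 6, 30)) + 0.3 * rng.standard_normal(30)
grid = (0.1, 1.0, 10.0)
best = min(grid, key=lambda lam: psm_criterion(y, lam, rho=0.5, k=0.5))
```

The weight k trades off the two components; k = 1 reduces to the weighted GCV and k = 0 to the UBR term.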
<title>3.6. Proposed Smoothing Method (PSM) Algorithm</title><p>Step 1: Read the simulated sample data <math><semantics><mrow><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub><mo>,</mo><mi> </mi><msub><mrow><mi>y</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> for i = 1, . . . , T. For each of the pre-selected smoothing parameters <math><semantics><mrow><msub><mrow><mi>λ</mi></mrow><mrow><mn>1</mn></mrow></msub><mi> </mi><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mi> </mi><mo>.</mo><mi> </mi><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><msub><mrow><mi>λ</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math>, calculate the respective set of smoothing spline estimates <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi> </mi><mfenced open="{" close="}" separators="|"><mrow><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>λ</mi><mn>1</mn></mrow></msub><mi> </mi><mo>,</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>.</mo><mi> </mi><mo>,</mo><mi> </mi><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>λ</mi><mi>t</mi></mrow></msub><mi> </mi><mi> </mi></mrow></mfenced></mrow></semantics></math></p>
<p>Step 2: For the given &#x3bb;, &#x3c3;, and T, use the data from Step 1 to fit a curve and obtain the estimates <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> and <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> one step ahead by linear extension</p>
<p>Step 3: Insert the weight (k) into the coefficients of the GCV and UBR criteria</p>
<p>Step 4: Obtain the predictive mean square error <math><semantics><mrow><mi>P</mi><mi>M</mi><mi>S</mi><mi>E</mi><mfenced separators="|"><mrow><msub><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover></mrow><mrow><mi>λ</mi></mrow></msub></mrow></mfenced><mo>=</mo><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>t</mi></mrow></munderover><mrow><msup><mrow><mfenced separators="|"><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mo>-</mo><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></mfenced></mrow><mrow><mn>2</mn></mrow></msup></mrow></mrow></mrow></semantics></math> for these points</p>
<p>Step 5: Add all the PMSE values to obtain the resulting PSM value for the given &#x3bb; and &#x3c1;.</p>
<p>Step 6: Repeat Steps 1&#x2013;5 1,000 times.</p>
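<p>The steps above can be sketched in code. The following Python sketch is illustrative only: the paper's programs were written in R, and the mapping of the smoothing parameter &#x3bb; to scipy's s argument, the pre-selected &#x3bb; grid, and all helper names are assumptions, not the authors' implementation.</p>

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

def simulate_ar1(T, rho, sigma=0.8):
    """AR(1) errors: eps_t = rho * eps_{t-1} + v_t, v_t ~ N(0, sigma^2)."""
    eps = np.zeros(T)
    v = rng.normal(0.0, sigma, T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + v[t]
    return eps

def one_replication(T=60, rho=0.5, lambdas=(0.5, 1.0, 2.0, 4.0)):
    # Step 1: simulated sample and one spline fit per pre-selected lambda.
    t = np.arange(1.0, T + 1.0)
    f_true = 2.0 * np.sin(np.pi / t)              # signal from Eq. (25)
    y = f_true + simulate_ar1(T, rho)
    fits = {lam: UnivariateSpline(t, y, s=lam * T)(t) for lam in lambdas}
    # Step 4: PMSE of each fit against the true curve.
    return {lam: float(np.sum((f_true - fhat) ** 2)) for lam, fhat in fits.items()}

# Steps 5-6: accumulate PMSE over replications (1,000 in the paper).
reps = [one_replication() for _ in range(50)]
avg = {lam: float(np.mean([r[lam] for r in reps])) for lam in reps[0]}
```

<p>Steps 2 and 3 (the linear extension and the k-weighted GCV/UBR coefficients) depend on details not reproduced in this excerpt and are omitted from the sketch.</p>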
<title>3.7. Monte Carlo Simulation Study</title><p>This section reports the outcome of a Monte Carlo simulation study conducted to assess the performance of the four smoothing techniques described in this research, namely GML, GCV, UBR, and PSM. The datasets were generated with a program written in R (version 3.2.3) for time-series sample sizes of 20, 60, and 100, and the experiment was replicated 1,000 times for each sample size. The predictive mean squared error (PMSE), adjusted R-square, and predicted R-square were used to assess the quality and performance of the smoothing techniques on each simulated dataset.</p>
<title>3.8. Equation used to generate the data</title><p>The data-generating model used to assess and measure the performance of the four spline smoothing methods is given as;</p>

<disp-formula id="FD25"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>y</mi><mfenced separators="|"><mrow><mi>t</mi></mrow></mfenced><mi mathvariant="normal"> </mi><mo>=</mo><mn>2</mn><mi>S</mi><mi>i</mi><mi>n</mi><mfenced separators="|"><mrow><mfrac><mrow><mi>π</mi></mrow><mrow><mi>t</mi></mrow></mfrac></mrow></mfenced><mi mathvariant="normal"> </mi><mo>+</mo><mi mathvariant="normal"> </mi><msub><mrow><mi>ε</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>,</mo><mi mathvariant="normal"> </mi><mi mathvariant="normal">t</mi><mi mathvariant="normal"> </mi><mo>=</mo><mi mathvariant="normal"> </mi><mn>20</mn><mo>,</mo><mi mathvariant="normal"> </mi><mn>60</mn><mo>,</mo><mi mathvariant="normal"> </mi><mi mathvariant="normal">a</mi><mi mathvariant="normal">n</mi><mi mathvariant="normal">d</mi><mi mathvariant="normal"> </mi><mn>100</mn></mrow></semantics></math></div><div class="l"><label>(25)</label></div></div></disp-formula><p>Where; <math><semantics><mrow><mi>π</mi><mo>=</mo><msup><mrow><mn>180</mn></mrow><mrow><mn>0</mn></mrow></msup></mrow></semantics></math>, and <math><semantics><mrow><msub><mrow><mi>ε</mi></mrow><mrow><mi>t</mi></mrow></msub><mi> </mi><mo>~</mo><mi> </mi><mi>N</mi><mfenced separators="|"><mrow><mn>0</mn><mo>,</mo><mi> </mi><mi>σ</mi><msup><mrow><mi>W</mi></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup></mrow></mfenced></mrow></semantics></math> is a first-order autoregressive process AR(1) with a mean of 0, a standard deviation of 0.8, and autocorrelation levels (&#x3c1;) of 0.2, 0.5, and 0.8 at a 95% confidence limit. Note that <math><semantics><mrow><msub><mrow><mi>ε</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>=</mo><mi> </mi><mi>ρ</mi><msub><mrow><mi>ε</mi></mrow><mrow><mi>t</mi><mo>-</mo><mn>1</mn></mrow></msub><mo>+</mo><msub><mrow><mi>v</mi></mrow><mrow><mi>t</mi></mrow></msub></mrow></semantics></math> and 
<math><semantics><mrow><msub><mrow><mi>v</mi></mrow><mrow><mi>t</mi></mrow></msub><mo>~</mo><mi>N</mi><mo>(</mo><mn>0</mn><mo>,</mo><msup><mrow><mi>σ</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>)</mo></mrow></semantics></math></p>
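<p>A quick numerical check of this data-generating process: under Eq. (25) the errors follow an AR(1) scheme, so the lag-1 sample autocorrelation of a long simulated error series should sit near the chosen &#x3c1;. The Python sketch below assumes the errors start at zero; the function names are illustrative, not the paper's R code.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_errors(T, rho, sigma=0.8):
    """eps_t = rho * eps_{t-1} + v_t with v_t ~ N(0, sigma^2), eps_0 = 0."""
    v = rng.normal(0.0, sigma, T)
    eps = np.zeros(T)
    for t in range(1, T):
        eps[t] = rho * eps[t - 1] + v[t]
    return eps

def gen_series(T, rho, sigma=0.8):
    """y_t = 2*sin(pi/t) + eps_t, the model of Eq. (25)."""
    t = np.arange(1.0, T + 1.0)
    return 2.0 * np.sin(np.pi / t) + ar1_errors(T, rho, sigma)

# Lag-1 autocorrelation of a long error series should be close to rho = 0.5.
eps = ar1_errors(5000, 0.5)
r1 = float(np.corrcoef(eps[:-1], eps[1:])[0, 1])
```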
<title>3.9. Experimental design and data generation</title><p>The experimental design adopted in this study is as follows;</p>
<p>Three time-series sample sizes (T) of 20, 60, and 100 were considered in the data generation</p>
<p>Three autocorrelation levels were considered, i.e. &#x3c1; = 0.2, 0.5, and 0.8</p>
<p>One standard deviation value was considered, i.e. <math><semantics><mrow><mi>σ</mi><mo>=</mo><mi> </mi><mn>0.8</mn></mrow></semantics></math></p>
<p>The dataset was simulated with 1,000 replications in each of the 3 &#x00d7; 3 &#x00d7; 4 &#x00d7; 1 = 36 combinations of T, &#x3c1;, &#x3bb;, and &#x3c3;.</p>
<p>All the parameters selected in the experimental design are similar to the ones used in [
<xref ref-type="bibr" rid="R33">33</xref>].</p>
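<p>The design grid above can be enumerated directly; the short sketch below (in Python rather than the paper's R, purely for illustration) confirms the 3 &#x00d7; 3 &#x00d7; 4 &#x00d7; 1 = 36 cell count.</p>

```python
from itertools import product

Ts = (20, 60, 100)      # time-series sizes
rhos = (0.2, 0.5, 0.8)  # autocorrelation levels
dss = (1, 2, 3, 4)      # degrees of smoothing (the lambda settings)
sigmas = (0.8,)         # standard deviation

grid = list(product(Ts, rhos, dss, sigmas))
n_cells = len(grid)     # 3 * 3 * 4 * 1 = 36 combinations
```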
<title>3.10. Smoothing Spline Assessment methods used in this Study</title><p>Efforts were made in this study to examine and compare the strength of the four spline smoothing estimators, namely Generalized Cross-Validation (GCV), Unbiased Risk (UBR), Generalized Maximum Likelihood (GML), and the Proposed Smoothing Method (PSM), developed by taking a weighted hybrid of GCV and UBR.</p>
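<p>The closed form of the weighted hybrid is not reproduced in this excerpt, so the sketch below should be read as an assumption: it combines the standard GCV and UBR criteria for a linear smoother as k&#x00b7;GCV + (1 - k)&#x00b7;UBR, and it stands in a simple difference-penalty smoother for the cubic-spline smoother matrix S<sub>&#x3bb;</sub>.</p>

```python
import numpy as np

def smoother_matrix(n, lam):
    """Hat matrix of a discrete penalized smoother (I + lam*D'D)^-1, where D
    is the second-difference operator; a stand-in for the spline S_lambda."""
    D = np.diff(np.eye(n), n=2, axis=0)
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def gcv(y, S):
    """Generalized cross-validation score of a linear smoother S."""
    n = len(y)
    resid = (np.eye(n) - S) @ y
    return float((resid @ resid / n) / (np.trace(np.eye(n) - S) / n) ** 2)

def ubr(y, S, sigma2):
    """Unbiased risk estimate, assuming a known error variance sigma2."""
    n = len(y)
    resid = (np.eye(n) - S) @ y
    return float(resid @ resid / n + 2.0 * sigma2 * np.trace(S) / n)

def psm(y, S, sigma2, k):
    """Hedged reading of the paper's hybrid: a k-weighted mix of GCV and UBR."""
    return k * gcv(y, S) + (1.0 - k) * ubr(y, S, sigma2)

# Demo: the hybrid always lies between the two criteria it mixes.
rng = np.random.default_rng(0)
y = rng.normal(size=20)
S = smoother_matrix(20, 1.0)
g, u = gcv(y, S), ubr(y, S, 0.64)
p = psm(y, S, 0.64, k=0.3)
```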
<p><bold>(i) Predictive Mean Square Error</bold></p>
<p>A comparison was made to test the effect and performance of the four estimation methods in the presence of autocorrelated errors. The four methods, namely Generalized Cross-Validation (GCV), Generalized Maximum Likelihood (GML), the Proposed Smoothing Method (PSM, 0 &lt; k &lt; 1), and Unbiased Risk (UBR), were estimated under different autocorrelation levels using code written in the R console. The data generation was carried out for each of the four methods, and the four spline smoothing estimation methods were evaluated and compared by applying the asymptotic sampling properties of the mean square prediction error (MSPE) criterion.</p>
<p>The predictive mean squared error (PMSE) of a smoothing curve or model-fitting process, according to [
<xref ref-type="bibr" rid="R26">26</xref>] and [
<xref ref-type="bibr" rid="R4">4</xref>], is the expected value of the squared difference between the fitted value <math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> and the observed value <math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math>. It is used to assess the performance and attributes of smoothing methods such as Cross-Validation, Generalized Cross-Validation, and Generalized Maximum Likelihood. The predictive mean square error (PMSE) is written mathematically as;</p>

<disp-formula id="FD26"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>P</mi><mi>M</mi><mi>S</mi><mi>E</mi><mfenced separators="|"><mrow><mi>λ</mi></mrow></mfenced><mo>=</mo><mi mathvariant="normal"> </mi><mi>E</mi><mfenced open="[" close="]" separators="|"><mrow><msup><mrow><mrow><munderover><mo stretchy="false">∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mrow><mi>n</mi></mrow></munderover><mrow><mfenced separators="|"><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced><mo>-</mo><mi mathvariant="normal"> </mi><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></mfenced></mrow></mrow></mrow><mrow><mn>2</mn></mrow></msup></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(26)</label></div></div></disp-formula><p>The predictive mean square error is usually separated into two parts: the first part is the sum of squared biases of the fitted values, and the other part is the sum of variances of the fitted observations.</p>
<p>Where;</p>
<p><math><semantics><mrow><mi>f</mi><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> = observed value</p>
<p><math><semantics><mrow><mover accent="true"><mrow><mi>f</mi></mrow><mo>^</mo></mover><mfenced separators="|"><mrow><msub><mrow><mi>x</mi></mrow><mrow><mi>i</mi></mrow></msub></mrow></mfenced></mrow></semantics></math> = predicted or estimated value</p>
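<p>Eq. (26) and the bias&#x2013;variance split just described can be checked numerically. The Python sketch below is illustrative; the identity holds exactly when the variance of the fitted values is taken over replications.</p>

```python
import numpy as np

def pmse(f_true, f_hat):
    """Eq. (26): sum of squared differences between true and fitted values."""
    return float(np.sum((np.asarray(f_true) - np.asarray(f_hat)) ** 2))

def pmse_decomposition(f_true, fits):
    """Split average PMSE over replications into the two parts named in the
    text: summed squared bias plus summed variance of the fitted values."""
    fits = np.asarray(fits)                      # shape (replications, n)
    bias2 = float(np.sum((fits.mean(axis=0) - f_true) ** 2))
    var = float(np.sum(fits.var(axis=0)))
    return bias2, var

# Demo: average PMSE equals squared bias plus variance, exactly.
rng = np.random.default_rng(0)
f = np.zeros(10)                                 # a toy "true" curve
fits = rng.normal(0.3, 1.0, size=(200, 10))      # toy fitted curves
bias2, var = pmse_decomposition(f, fits)
avg_pmse = float(np.mean([pmse(f, fr) for fr in fits]))
```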
<p>At each scenario of the specification, for instance, time-series size (T) = 20, autocorrelation level (&#x3c1;) = 0.2, d.f. = 1, and standard deviation (&#x3c3;) = 0.8, the smoothing methods were tested and compared using the asymptotic properties of the estimators based on the PMSE criterion.</p>
<p><bold>(ii) Test for Overfitting in Spline Smoothing</bold></p>
<p>In statistics, overfitting occurs when a model fits the sample data so closely that it fails to predict future observations reliably. PRESS and the predicted R-square are among the simplest ways to detect overfitting in smoothing methods and models. The result may be interpreted by comparing the predicted R-square to the ordinary R-square and observing whether there is a great difference between the two. If the difference is large, the model fits the original data but does not predict new observations well, and overfitting is likely. An overfitted model has too many terms and begins to fit the random noise in the sample, and random noise cannot be predicted. The predicted R-square is a statistic that determines how well a model predicts the response for new observations. It is computed by removing each observation from the data set in turn, re-estimating the regression model, and determining how well the model predicts the removed observation. The predicted R-square is usually written mathematically as;</p>

<disp-formula id="FD27"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>P</mi><mi>r</mi><mi>e</mi><mi>d</mi><mo>.</mo><mi mathvariant="normal"> </mi><mi>R</mi><mo>-</mo><mi>S</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mo>=</mo><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><mfrac><mrow><mi>P</mi><mi>r</mi><mi>e</mi><mi>d</mi><mi>i</mi><mi>c</mi><mi>t</mi><mi>e</mi><mi>d</mi><mi mathvariant="normal"> </mi><mi>r</mi><mi>e</mi><mi>s</mi><mi>i</mi><mi>d</mi><mi>u</mi><mi>a</mi><mi>l</mi><mi mathvariant="normal"> </mi><mi>s</mi><mi>u</mi><mi>m</mi><mi mathvariant="normal"> </mi><mi>o</mi><mi>f</mi><mi mathvariant="normal"> </mi><mi>s</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mi>s</mi><mi mathvariant="normal"> </mi><mfenced separators="|"><mrow><mi>P</mi><mi>R</mi><mi>E</mi><mi>S</mi><mi>S</mi></mrow></mfenced></mrow><mrow><mi>S</mi><mi>u</mi><mi>m</mi><mi mathvariant="normal"> </mi><mi>o</mi><mi>f</mi><mi mathvariant="normal"> </mi><mi>s</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mi mathvariant="normal"> </mi><mi>t</mi><mi>o</mi><mi>t</mi><mi>a</mi><mi>l</mi></mrow></mfrac></mrow></mfenced><mi mathvariant="normal"> </mi><mo>×</mo><mn>100</mn></mrow></semantics></math></div><div class="l"><label>(27)</label></div></div></disp-formula><p>While R-square, also known as the coefficient of determination, can be derived through; </p>

<disp-formula id="FD28"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>R</mi><mo>-</mo><mi>S</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mo>=</mo><mi mathvariant="normal"> </mi><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><mfrac><mrow><mi>S</mi><mi>u</mi><mi>m</mi><mi mathvariant="normal"> </mi><mi>o</mi><mi>f</mi><mi mathvariant="normal"> </mi><mi>s</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mi>s</mi><mi mathvariant="normal"> </mi><mi>e</mi><mi>r</mi><mi>r</mi><mi>o</mi><mi>r</mi></mrow><mrow><mi>S</mi><mi>u</mi><mi>m</mi><mi mathvariant="normal"> </mi><mi>o</mi><mi>f</mi><mi mathvariant="normal"> </mi><mi>s</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mi mathvariant="normal"> </mi><mi>t</mi><mi>o</mi><mi>t</mi><mi>a</mi><mi>l</mi></mrow></mfrac></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(28)</label></div></div></disp-formula><p><bold>(iii) Test for Goodness-of-fit for the Smoothing Methods</bold></p>
<p>The goodness-of-fit of the smoothing methods explains how well the methods fit the simulated and real-life data. It also summarizes the differences between the observed values and the predicted or estimated values. The adjusted R-square was used to determine the best-fitting smoothing method. It is written mathematically as;</p>

<disp-formula id="FD29"><div class="html-disp-formula-info"><div class="f"><math display="inline"><semantics><mrow><mi>A</mi><mi>d</mi><mi>j</mi><mi>u</mi><mi>s</mi><mi>t</mi><mi>e</mi><mi>d</mi><mi mathvariant="normal"> </mi><mi>R</mi><mo>-</mo><mi>S</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi><mo>=</mo><mi mathvariant="normal"> </mi><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><mfrac><mrow><mfenced separators="|"><mrow><mn>1</mn><mo>-</mo><mi>R</mi><mi>s</mi><mi>q</mi><mi>u</mi><mi>a</mi><mi>r</mi><mi>e</mi></mrow></mfenced><mo>×</mo><mfenced separators="|"><mrow><mi>n</mi><mo>-</mo><mn>1</mn></mrow></mfenced></mrow><mrow><mi>n</mi><mo>-</mo><mi>p</mi></mrow></mfrac></mrow></mfenced></mrow></semantics></math></div><div class="l"><label>(29)</label></div></div></disp-formula><p>Where; n = the number of observations and p = the number of parameters.</p>
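<p>Equations (27)&#x2013;(29) can be sketched together. For a linear fit the PRESS residual needed in Eq. (27) is e<sub>i</sub>/(1 - h<sub>ii</sub>), with h<sub>ii</sub> the hat-matrix diagonal, so no refitting is required. The demonstration below on an ordinary linear regression is an illustrative Python sketch, not the paper's R code.</p>

```python
import numpy as np

def predicted_r_square(y, y_hat, hat_diag):
    """Eq. (27): (1 - PRESS / SS_total) * 100, using the leave-one-out
    (PRESS) residuals e_i / (1 - h_ii) of a linear smoother."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    press = np.sum(((y - y_hat) / (1.0 - hat_diag)) ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)
    return float((1.0 - press / ss_total) * 100.0)

def r_square(y, y_hat):
    """Eq. (28): 1 - SS_error / SS_total."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))

def adjusted_r_square(r2, n, p):
    """Eq. (29): penalize R-square by the number of fitted parameters p."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

# Demo on an ordinary linear fit, whose hat matrix is X (X'X)^-1 X'.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 30)
H = X @ np.linalg.inv(X.T @ X) @ X.T
r2 = r_square(y, H @ y)
adj = adjusted_r_square(r2, n=30, p=2)
pred = predicted_r_square(y, H @ y, np.diag(H))
```

<p>A large gap between r2 and pred would signal the overfitting discussed above.</p>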
</sec><sec id="sec4">
<title>Results and Discussion</title><p>Table 1, Table <xref ref-type="table" rid="tabtable 2">2</xref>, and Table <xref ref-type="table" rid="tabtable 3">3</xref> present the summary fit results of the smoothing spline regression model and the model performance criteria, i.e. the PMSE, multiple R-square, adjusted R-square, and predicted R-square, based on the time-series period <italic>(T = 60)</italic>, four degrees of smoothing <italic>(D.S. = 1, 2, 3, and 4)</italic>, and autocorrelation level <italic>(&#x3c1; = 0.5)</italic>. The results revealed that all the coefficients of the smoothing methods&#x2019; parameters were significant at <italic>(P-value &lt; 0.001, &lt; 0.01, and &lt; 0.05)</italic>.</p>
<p>The PMSE of the four smoothing techniques indicated that the Proposed Smoothing Method (<italic>PSM = 0.18</italic>) had the smallest PMSE of <italic>0.757980</italic> at <italic>T = 60, D.S. = 2, and &#x3c1; = 0.5</italic>. This was closely followed by <italic>UBR</italic> with a PMSE of <italic>1.017353</italic> at <italic>T = 60, D.S. = 2, and &#x3c1; = 0.5</italic>, then GML with a PMSE of <italic>1.300494</italic> at <italic>T = 60, D.S. = 2</italic>, and <math><semantics><mrow><mi>ρ</mi><mo>=</mo><mn>0.5</mn></mrow></semantics></math>. The result implies that the Proposed Smoothing Method (<italic>PSM = 0.18</italic>) performs better than the other smoothing methods at time-series size <italic>(T = 60)</italic> and rho = <italic>0.5</italic>.</p>
<p>The adjusted R-square result showed that the Proposed Smoothing Method (<italic>PSM = 0.18</italic>) had the largest value of <italic>0.8095</italic> at <italic>T = 60, D.S. = 2, and &#x3c1; = 0.5</italic>, closely followed by the Proposed Smoothing Method <italic>(PSM = 0.20)</italic> with a value of <italic>0.7879</italic> at <italic>T = 20, D.S. = 2, and &#x3c1; = 0.5</italic>, then the GCV smoothing method with <italic>0.7828</italic> at <italic>T = 60, D.S. = 2, and </italic><math><semantics><mrow><mi>ρ</mi><mo>=</mo><mn>0.5</mn></mrow></semantics></math>. It can be inferred from this result that the Proposed Smoothing Method (PSM = 0.18) provides the best fit to the time-series observations at time-series size <italic>(T = 60)</italic> and rho = 0.5.</p>
<p>It can be seen from the results presented in Tables <xref ref-type="table" rid="tab1">1</xref>&#x2013;3 that the difference between the multiple R-square and the predictive R-square of the Proposed Smoothing Method was the least when compared to the other smoothing methods. At <italic>T = 60, D.S. = 1, 2, 3, and 4,</italic> <math><semantics><mrow><mi>ρ</mi><mo>=</mo><mn>0.5</mn><mo>,</mo></mrow></semantics></math> the differences between the multiple R-square and predictive R-square were <italic>0.3669, 0.0364, 0.4599, and 0.1759</italic> respectively. This result shows that the Proposed Smoothing Method does not overfit the time-series observations when the time-series size is 60 and rho = 0.5.</p>
<table-wrap id="tab1">
<label>Table 1</label>
<caption>
<p><b>Simulation Result for Smoothing Spline Regression Model of GML, GCV, PSM, and UBR for T=20, and &#x003c1; = 0.2, 0.5</b></p>
</caption>
<table>
<tr>
<td><graphic xlink:href="618.table.001" /></td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tab2">
<label>Table 2</label>
<caption>
<p><b>Simulation Result for Cubic Spline Regression Model of GML, GCV, PSM, and UBR for T=60, and &#x003c1; = 0.2</b></p>
</caption>
<table>
<tr>
<td><graphic xlink:href="618.table.002" /></td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap><table-wrap id="tab3">
<label>Table 3</label>
<caption>
<p><b>Simulation Result for Cubic Spline Regression Model of GML, GCV, PSM, and UBR for T=100, and &#x003c1; = 0.2</b></p>
</caption>
<table>
<tr>
<td>
<graphic xlink:href="618.table.003" />
</td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap>
<p>Figure 1, Figure <xref ref-type="fig" rid="figfigure 2">2</xref>, and Figure <xref ref-type="fig" rid="figfigure 3">3</xref> below clearly show the comparisons of the behaviors of the cubic smoothing splines selected by GCV, GML, UBR, and PSM for sample sizes 20, 60, and 100, respectively. It was observed that the observed values of Generalized Cross-Validation were closer to the fitted/estimated values when compared to the other smoothing methods.</p>
<fig id="fig1">
<label>Figure 1</label>
<caption>
<p>Cubic smoothing spline and fitted curve using GCV, GML, UBR, and PSM, <i>(K=0.04) for </i><i>T = 20, rho = 0.2, 0.5, and 0.8, sigma = 0.8</i></p>
</caption>
<graphic xlink:href="618.fig.001" />
</fig><fig id="fig2">
<label>Figure 2</label>
<caption>
<p>Cubic smoothing spline and fitted curve using GCV, GML, UBR, and PSM, <i>(K=0.04) for </i><i>T = 60, rho = 0.2, 0.5, and 0.8, sigma = 0.8</i></p>
</caption>
<graphic xlink:href="618.fig.002" />
</fig><fig id="fig3">
<label>Figure 3</label>
<caption>
<p>Cubic smoothing spline and fitted curve using GCV, GML, UBR, and PSM, <i>(K=0.04) for </i><i>T = 100, rho = 0.2, 0.5, and 0.8, sigma = 0.8</i></p>
</caption>
<graphic xlink:href="618.fig.003" />
</fig></sec><sec id="sec5">
<title>Application</title><table-wrap id="tab4">
<label>Table 4</label>
<caption>
<p><b>Test for autocorrelation in the real-life data</b></p>
</caption>
<table>
<tr>
<td><graphic xlink:href="618.table.004" /></td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap><p></p>
<p>H<sub>0</sub>: The data are independently distributed, i.e. the correlations in the population from which the samples are drawn are zero</p>
<p>H<sub>1</sub>: The data are not independently distributed; they have serial correlation</p>
<p><bold>Decision:</bold> Autocorrelation exists in the model</p>
<table-wrap id="tab5">
<label>Table 5</label>
<caption>
<p><b>Stationarity test for the real-life data with smoothing parameters</b></p>
</caption>
<table>
<tr>
<td><graphic xlink:href="618.table.005" /></td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap>
<p>H<sub>0</sub>: The observations are not stationary; there exists a unit root</p>
<p>H<sub>1</sub>: The observations are stationary; there is no unit root</p>
<p><bold>Decision: </bold>The data are stationary; there is no unit root</p>
<table-wrap id="tab6">
<label>Table 6</label>
<caption>
<p><b>Cubic Smoothing Spline Regression and Predictive Mean Square Error Results for the Real-life Data</b></p>
</caption>
<table>
<tr>
<td>
<graphic xlink:href="618.table.006" />
</td>
</tr>
</table>
<table-wrap-foot>
<fn>

</fn>
</table-wrap-foot>
</table-wrap>
<p>Table 6 above presents the predictive mean square error for the real-life data on the Standard International Trade Classification (SITC) export and import price indices in Nigeria between 2001 and 2020. The Proposed Smoothing Method (PSM) had the least predictive mean square error (PMSE), confirming that it is the preferred smoothing method for both simulated and real-life data. The result also presents the multiple, adjusted, and predictive R-square. It can be inferred from the adjusted R-square of 59.6% that the proposed smoothing method has the best fit among the four smoothing methods.</p>
<p>Figure <xref ref-type="fig" rid="fig4">4</xref> presents the smoothing curve of the annual Standard International Trade Classification import price index dataset for Nigeria from 1970 to 2020. The data used for the analysis were first tested for stationarity and autocorrelation. The proposed smoothing method, with optimal smoothing parameter &#x3bb; = 0.062439908 and weight (k = 0.04), was used to analyze the residuals carefully in order to detect disturbances or errors in the stationary part of the series. It was observed that the PSM curve is very close to the real-life data and also provides a good fit.</p>
<fig id="fig4">
<label>Figure 4</label>
<caption>
<p>Smoothing curve of SITC import and Export price index dataset (dark blue) and fitted value (red) with Smoothing Parameters Chosen by PSM (0.04).</p>
</caption>
<graphic xlink:href="618.fig.004" />
</fig><p>The plot above presents the smoothing curve of the annual Standard International Trade Classification export price index dataset for Nigeria from 1970 to 2018. The curve presented in Figure <xref ref-type="fig" rid="fig4">4</xref> indicates that the proposed smoothing method, with optimal smoothing parameter &#x3bb; = 0.062439908 and weight (k = 0.04), was used to smooth the residuals for disturbances or errors in the stationary part of the series. It was observed that the PSM curve is very close to the real-life data and also provides a good fit.</p>
</sec><sec id="sec6">
<title>Conclusion</title><p>The results from the simulation and real-life data in this study provide insight into which smoothing method produces the best-fitting model for time-series observations, which method does not overfit the data, the optimum value of the proposed smoothing method, and the performance of the smoothing methods when autocorrelation is present in the error term.</p>
<p>The result of the goodness-of-fit test revealed that the proposed smoothing method had the best-fitting model among the competing smoothing methods on the simulated and real-life data. The proposed smoothing method&#x2019;s model fitted without any defect or shortcoming under the cubic spline functional form, with the highest adjusted R-square of 0.9618 at T = 20, D.S. = 4, <math><semantics><mrow><mi>ρ</mi><mo>=</mo><mn>0.2</mn><mo>,</mo></mrow></semantics></math> and a weight value of <italic>k</italic> = 0.04.</p>
<p>The finding on the effect of autocorrelation in the error terms of the four smoothing methods considered in this study showed that the proposed smoothing method (PSM) works well at all autocorrelation levels <italic>(&#x3c1; = 0.2, 0.5, and 0.8)</italic>. It also provided a better estimate, proved to be the most preferred smoothing method compared with GML, GCV, and UBR, and does not overfit a time-series observation with autocorrelation in the error term, with a predicted R-square of <italic>0.6218</italic>. This result is broadly similar to those of [
<xref ref-type="bibr" rid="R25">25</xref>] and [
<xref ref-type="bibr" rid="R35">35</xref>] for GML but differs from [
<xref ref-type="bibr" rid="R24">24</xref>,<xref ref-type="bibr" rid="R36">36</xref>,<xref ref-type="bibr" rid="R37">37</xref>,<xref ref-type="bibr" rid="R38">38</xref>,<xref ref-type="bibr" rid="R39">39</xref>].</p>
<p>The study on the optimum value of the Proposed Smoothing Method (PSM) indicated that the smoothing method performs at an optimal level when (k = 0.04) with a predictive mean square error value of 0.046857, multiple R-Square of 0.9678, Adjusted R-Square of 0.9618 and predictive R-square of 0.6428.</p>
<p>The result on the effect of sample size on the performance of the four smoothing methods shows that the proposed smoothing method is computationally more efficient and consistent, and works well at all sample sizes (T = 20, 60, and 100) in the Monte Carlo experiment. The plots and results presented in the tables and Figures <xref ref-type="fig" rid="fig1">1</xref>&#x2013;4 indicate that GML, GCV, and UBR showed signs of inefficiency at all the time-series sizes (T = 20, 60, and 100). This finding is quite different from those of [
<xref ref-type="bibr" rid="R25">25</xref>,<xref ref-type="bibr" rid="R35">35</xref>] and [
<xref ref-type="bibr" rid="R40">40</xref>].</p>
<p>The findings also showed the Proposed Smoothing Method (PSM) to be the most efficient among the four competing smoothing methods for real-life data. This result disagrees with the finding by [
<xref ref-type="bibr" rid="R41">41</xref>].</p>
<p><bold>Conflicts of Interest</bold>: The authors declare no conflict of interest.</p>
</sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      
<ref id="R1">
<label>[1]</label>
<mixed-citation publication-type="other">Wahba, G. &#x00026; Wang, Y. (1993). Behavior near Zero of the Distribution of GCV Smoothing Parameter Estimates for Splines. Statistics and Probability Letters, (25), 105 - 111.
</mixed-citation>
</ref>
<ref id="R2">
<label>[2]</label>
<mixed-citation publication-type="other">Aydin, D. &#x00026; Memmedli, M. (2011). Optimum Smoothing Parameter Selection for Penalized Least-squares in the Form of Lin-ear Mixed Effect Models. Optimization, iFirst: 1-18.
</mixed-citation>
</ref>
<ref id="R3">
<label>[3]</label>
<mixed-citation publication-type="other">Chen, C.S. &#x00026; Huang, H.C. (2011). An improved Cp Criterion for Spline Smoothing. Journal of Statistical Planning and Inference, 144(1), 445 - 471.
</mixed-citation>
</ref>
<ref id="R4">
<label>[4]</label>
<mixed-citation publication-type="other">Adams, S.O., Ipinyomi, R.A. (2019). A Proposed Spline Smoothing Estimation Method for Time Series Observations. Interna-tional Journal of Mathematics and Statistics Invention (IJMSI), 07(02), 18-25.
</mixed-citation>
</ref>
<ref id="R5">
<label>[5]</label>
<mixed-citation publication-type="other">Aydin, D., M. Memmedli, and R. E. Omay. (2013). Smoothing Parameter Selection for Nonparametric Regression using a Smoothing Spline. European Journal of Pure and Applied Mathematics, 6, 222-38.
</mixed-citation>
</ref>
<ref id="R6">
<label>[6]</label>
<mixed-citation publication-type="other">Jansen, Maarten (2015). Generalized Cross Validation in Variable Selection with and Without Shrinkage. Journal of Statistical Planning and Inference, 159, 90-104. https://doi.org/10.1016/j.jsp.2014.10.007
</mixed-citation>
</ref>
<ref id="R7">
<label>[7]</label>
<mixed-citation publication-type="other">Lukas, M.A., De Hoog, F.R. &#x00026; Anderssen R.S. (2016). Practical Use of Robust GCV and Modified GCV for Spline Smoothing. Computational Statistics. 31(1), 269-289.
</mixed-citation>
</ref>
<ref id="R8">
<label>[8]</label>
<mixed-citation publication-type="other">Adams, S.O. (2021). An Improved Spline Smoothing Method for Estimation in the Presence of Autocorrelation Errors. The University of Ilorin.
</mixed-citation>
</ref>
<ref id="R9">
<label>[9]</label>
<mixed-citation publication-type="other">Adams, S.O., Balogun, P.O. (2020). Panel Data Analysis on Corporate Effective Tax Rates of Some Listed Large Firms in Ni-geria. Dutch Journal of Finance and Management, 4(2), 1-9, 2542-4750. https://doi.org/10.21601/djfm/9345
</mixed-citation>
</ref>
<ref id="R10">
<label>[10]</label>
<mixed-citation publication-type="other">Devi, A.R., Budiantara, I.N. &#x00026; Vita-Ratnasari, V. (2018). Unbiased Risk and Cross-Validation Method for Selecting Optimal Knots in Multivariable Nonparametric Regression Spline Truncated (Case Study: The Unemployment Rate in Central Java, Indonesia, 2015). AIP Conference Proceedings 2021.
</mixed-citation>
</ref>
<ref id="R11">
<label>[11]</label>
<mixed-citation publication-type="other">Adams, S.O., Ipinyomi, R.O., Yahaya, H.U. (2017). Smoothing Spline of ARMA Observations in the Presence Autocorrelation Error. European Journal of Statistics and Probability, 5(1), 1-8.
</mixed-citation>
</ref>
<ref id="R12">
<label>[12]</label>
<mixed-citation publication-type="other">Adams, S.O., Yahaya, H.U., Nasiru, M.O. (2017). Smoothing Parameter Estimation of the Generalized Cross Validation and Generalized Maximum Likelihood. IOSR Journal of Mathematics, 13(1), 41 - 44. https://doi:10.9790/5728-1301054144
</mixed-citation>
</ref>
<ref id="R13">
<label>[13]</label>
<mixed-citation publication-type="other">Xu, L. &#x00026; Zhou, J. (2019). A Model-Averaging Approach for Smoothing Spline Regression. Communications in Statistics - Simu-lation and Computation. 48(8), 2438 - 2451.
</mixed-citation>
</ref>
<ref id="R14">
<label>[14]</label>
<mixed-citation publication-type="other">Wahba, G. (1977). A Survey of Some Smoothing Problems and the Method of Generalized Cross-Validation for Solving Them. In P.R. Krishnaiah (Ed.), Applications of Statistics. North-Holland, Amsterdam.
</mixed-citation>
</ref>
<ref id="R15">
<label>[15]</label>
<mixed-citation publication-type="other">Wang, H., Meyer, M.C. &amp; Opsomer, J.D. (2013). Constrained Spline Regression in the Presence of AR(p) Errors. Journal of Nonparametric Statistics, 25, 809-827.
</mixed-citation>
</ref>
<ref id="R16">
<label>[16]</label>
<mixed-citation publication-type="other">Cook, R.D. &amp; Weisberg, S. (1982). Residuals and Influence in Regression. New York: Chapman and Hall.
</mixed-citation>
</ref>
<ref id="R17">
<label>[17]</label>
<mixed-citation publication-type="other">Craven, P. &amp; Wahba, G. (1979). Smoothing Noisy Data with Spline Functions. Numerische Mathematik, 31, 377-403.
</mixed-citation>
</ref>
<ref id="R18">
<label>[18]</label>
<mixed-citation publication-type="other">Xiang, D. &amp; Wahba, G. (1998). Approximate Smoothing Spline Methods for Large Data Sets in the Binary Case. Proceedings of the 1997 ASA Joint Statistical Meetings, Biometrics Section, 94-98.
</mixed-citation>
</ref>
<ref id="R19">
<label>[19]</label>
<mixed-citation publication-type="other">Wahba, G. (1990). Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics, 59. Philadelphia: SIAM.
</mixed-citation>
</ref>
<ref id="R20">
<label>[20]</label>
<mixed-citation publication-type="other">Wahba, G. (1975). Optimal Convergence Properties of Variable Knot Kernel and Orthogonal Series Methods for Density Estimation. Annals of Statistics, 3, 15-29.
</mixed-citation>
</ref>
<ref id="R21">
<label>[21]</label>
<mixed-citation publication-type="other">Wahba, G. (1980). Automatic Smoothing of the Log Periodogram. Journal of the American Statistical Association, 75, 122-132.
</mixed-citation>
</ref>
<ref id="R22">
<label>[22]</label>
<mixed-citation publication-type="other">Silverman, B.W. (1984). Spline Smoothing: The Equivalent Variable Kernel Method. Annals of Statistics, 12(3), 898-916.
</mixed-citation>
</ref>
<ref id="R23">
<label>[23]</label>
<mixed-citation publication-type="other">Wahba, G. (1983). Bayesian Confidence Intervals for the Cross-Validated Smoothing Spline. Journal of the Royal Statistical Society, Series B, 45, 133-150.
</mixed-citation>
</ref>
<ref id="R24">
<label>[24]</label>
<mixed-citation publication-type="other">Diggle, P.J. &amp; Hutchinson, M.F. (1989). On Spline Smoothing with Autocorrelated Errors. Australian Journal of Statistics, 31, 166-182.
</mixed-citation>
</ref>
<ref id="R25">
<label>[25]</label>
<mixed-citation publication-type="other">Wang, Y. (1998). Smoothing Spline Models with Correlated Random Errors. Journal of the American Statistical Association, 93(441), 341-348.
</mixed-citation>
</ref>
<ref id="R26">
<label>[26]</label>
<mixed-citation publication-type="other">Mallows, C.L. (1973). Some Comments on Cp. Technometrics, 15(4), 661-675.
</mixed-citation>
</ref>
<ref id="R27">
<label>[27]</label>
<mixed-citation publication-type="other">Wahba, G. (1979). Convergence Rates of Thin Plate Smoothing Splines when the Data are Noisy. In T. Gasser &amp; M. Rosenblatt (Eds.), Smoothing Techniques for Curve Estimation. Springer-Verlag, New York.
</mixed-citation>
</ref>
<ref id="R28">
<label>[28]</label>
<mixed-citation publication-type="other">Adams, S.O., Gayawan, E. &amp; Garba, M.K. (2009). Empirical Comparison of the Kruskal-Wallis Statistics and its Parametric Counterpart. Journal of Modern Mathematics and Statistics, 3(2), 38-42. Medwell Journal. https://doi:jmmstat.2009.38.42
</mixed-citation>
</ref>
<ref id="R29">
<label>[29]</label>
<mixed-citation publication-type="other">Adams, S.O. &amp; Ipinyomi, R.A. (2019). A New Smoothing Method for Time Series Data in the Presence of Autocorrelated Error. Asian Journal of Probability and Statistics, 4(4), 1-19. https://doi.org/10.9734/ajpas/2019/v4i430121
</mixed-citation>
</ref>
<ref id="R30">
<label>[30]</label>
<mixed-citation publication-type="other">Adams, S.O. &amp; Ipinyomi, R.A. (2020). On the Efficiency of the Weighted Generalized Cross Validation and Unbiased Risk Smoothing Method for Time Series Observations with Autocorrelated Error. International Journal of Academic and Applied Research, 4(7), 70-81.
</mixed-citation>
</ref>
<ref id="R31">
<label>[31]</label>
<mixed-citation publication-type="other">Adams, S.O. &amp; Yahaya, H.U. (2020). Comparative Study of GCV-MCP Hybrid Smoothing Methods for Predicting Time Series Observations. American Journal of Theoretical and Applied Statistics, 9(5), 219-227. https://doi.org/10.11648/j.ajtas.20200905.15
</mixed-citation>
</ref>
<ref id="R32">
<label>[32]</label>
<mixed-citation publication-type="other">Adams, S.O., Obaromi, A.D. &amp; Alumbugu, A.I. (2021). The Goodness of Fit Test of an Autocorrelated Time Series Cubic Smoothing Spline Model. Journal of the Nigerian Society of Physical Sciences, 3(3), 191-200. https://doi.org/10.46481/jnsps.2021.265
</mixed-citation>
</ref>
<ref id="R33">
<label>[33]</label>
<mixed-citation publication-type="other">Wahba, G. (1985). A Comparison of GCV and GML for Choosing the Smoothing Parameter in the Generalized Spline Smoothing Problem. The Annals of Statistics, 13(4), 1378-1402.
</mixed-citation>
</ref>
<ref id="R34">
<label>[34]</label>
<mixed-citation publication-type="other">Daniel, C. (1973). One-at-a-Time Plans. Journal of the American Statistical Association, 68(342), 353-360.
</mixed-citation>
</ref>
<ref id="R35">
<label>[35]</label>
<mixed-citation publication-type="other">Wang, Y., Guo, W. &amp; Brown, M.B. (2000). Spline Smoothing for Bivariate Data with Application to Association between Hormones. Statistica Sinica, 10, 377-397.
</mixed-citation>
</ref>
<ref id="R36">
<label>[36]</label>
<mixed-citation publication-type="other">Hart, J.D. &amp; Wehrly, T.E. (1986). Kernel Regression Estimation using Repeated Measurements Data. Journal of the American Statistical Association, 81, 1080-1088.
</mixed-citation>
</ref>
<ref id="R37">
<label>[37]</label>
<mixed-citation publication-type="other">Altman, N.S. (1990). Kernel Smoothing of Data with Correlated Errors. Journal of the American Statistical Association, 85, 749-759.
</mixed-citation>
</ref>
<ref id="R38">
<label>[38]</label>
<mixed-citation publication-type="other">Herrmann, E., Gasser, T. &amp; Kneip, A. (1992). Choice of Bandwidth for Kernel Regression When Residuals are Correlated. Biometrika, 79(4), 783-795.
</mixed-citation>
</ref>
<ref id="R39">
<label>[39]</label>
<mixed-citation publication-type="other">Krivobokova, T. &amp; Kauermann, G. (2007). A Note on Penalized Spline Smoothing with Correlated Errors. Journal of the American Statistical Association, 102, 1328-1337.
</mixed-citation>
</ref>
<ref id="R40">
<label>[40]</label>
<mixed-citation publication-type="other">Kim, T., Park, B., Moon, M. &amp; Kim, C. (2009). Using Bimodal Kernel for Inference in Nonparametric Regression with Correlated Errors. Journal of Multivariate Analysis, 100(7), 1487-1497.
</mixed-citation>
</ref>
<ref id="R41">
<label>[41]</label>
<mixed-citation publication-type="other">Carew, J.D., Wahba, G., Xie, X., Nordheim, E.V. &amp; Meyerand, M.E. (2003). Optimal Spline Smoothing of fMRI Time Series by Generalized Cross-Validation. NeuroImage, 18(4), 950-961.
</mixed-citation>
</ref>
    </ref-list>
  </back>
</article>