\end{itemize}

\subsection{Cross-Sectional Area}

The methodology used to analyze the cross-sectional area of the material flow is \textbf{geometric analysis}. Simply put, the geometry of a laden belt is compared with that of an empty belt; the resulting difference in area is that of the material itself.

To accomplish this analysis, a horizontal slice of the sensor data is used---see \autoref{fig:conveyor_top}. The slice represents the depth data of a single dimension, in this case the crosswise dimension of the belt.

\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{design/conveyor_top}
\caption{Graphical depiction of the LIDAR sensor image. The slice is a one-dimensional extract of the sensor image crosswise over the belt.}
\label{fig:conveyor_top}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{design/cross_analysis_new}
\label{fig:cross_analysis}
\end{figure}

During calibration, the empty belt is used to fit the polynomial belt curve $f(x)$. The fitting of this nth-degree polynomial is done with the least-squares method.
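The least-squares fit of the belt curve can be sketched in a few lines. The following pure-Python illustration is a minimal sketch, not the project's actual implementation; the function name and the use of the normal equations are illustrative assumptions:

```python
def fit_polynomial(xs, ys, degree):
    """Least-squares polynomial fit: returns coefficients c such that
    f(x) = c[0] + c[1]*x + ... + c[degree]*x**degree."""
    n = degree + 1
    # Normal equations (A^T A) c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Solve by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for row in range(col + 1, n):
            factor = ata[row][col] / ata[col][col]
            for k in range(col, n):
                ata[row][k] -= factor * ata[col][k]
            aty[row] -= factor * aty[col]
    # Back-substitution.
    coeffs = [0.0] * n
    for row in reversed(range(n)):
        tail = sum(ata[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = (aty[row] - tail) / ata[row][row]
    return coeffs
```

Fitting exact samples of a known parabola recovers its coefficients, which is a convenient sanity check for the calibration step.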

After calibration, the current slice curve $g(x)$ can be used to obtain the cross-sectional area $A_C$, as shown in \autoref{eq:cross_area} and \autoref{fig:cross_analysis}.

\begin{equation}
A_C = \int^{x_b}_{x_a}\left[ g(x) - f(x) \right] dx \label{eq:cross_area}
\end{equation}
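On sampled depth data, \autoref{eq:cross_area} reduces to numerically integrating the difference $g(x) - f(x)$ across the belt. A minimal sketch using the trapezoidal rule (the uniform sample spacing `dx` is a hypothetical parameter):

```python
def cross_sectional_area(g, f, dx):
    """Trapezoidal-rule approximation of the integral of g(x) - f(x)
    over equally spaced depth samples spanning the belt width."""
    diff = [gi - fi for gi, fi in zip(g, f)]
    # Trapezoidal rule: full weight for interior samples, half for the ends.
    return dx * (sum(diff) - 0.5 * (diff[0] + diff[-1]))
```

For a laden slice lying a constant 2 units above the belt curve across five samples spaced one unit apart, this yields an area of 8 square units, as expected for a 4-by-2 rectangle.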

\subsubsection{Further Considerations}
The accuracy of the computed cross-sectional area depends primarily on the accuracy of the depth data, as well as on the frame rate of the sensor.

However, further operations may be implemented in order to increase accuracy.

It is important to note, though, that the implementation of further operations may exhaust the processing capabilities of the platform. A crucial balance must therefore be struck between performance and accuracy.

\subsection{Belt Velocity}
Conventional belt scales use some form of a rotary encoder in order to measure the belt velocity. This is---however accurate---only an approximation of the velocity of the material flow itself, since material velocity may deviate from belt velocity depending on environmental or material conditions.

The fundamental operation used in the following methods to determine the velocity of the material flow is \textbf{cross-correlation}, which yields the most likely delay between two consecutive signals.

In the case of this project, given the known interval between two consecutive signals---i.e.\ the frame rate---it is possible to express this delay in the form of a physical displacement, in meters.
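This conversion can be sketched directly. The pixel pitch and frame rate below are hypothetical values for illustration, not project measurements:

```python
def belt_velocity(delay_px, metres_per_px, frame_rate_hz):
    """Convert a cross-correlation delay (in pixels between two
    consecutive frames) into a physical velocity in metres per second."""
    displacement_m = delay_px * metres_per_px  # physical shift per frame
    return displacement_m * frame_rate_hz      # divide by the frame time 1/f
```

A delay of 12 pixels at an assumed 5 mm per pixel and 30 frames per second corresponds to a displacement of 0.06 m per frame, i.e. 1.8 m/s.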

The various analytic methods used in this project differ only by which data is selected to represent the signal during cross-correlation. The algorithm of calculating the cross-correlation itself remains the same.

Equation \ref{eq:cross_corr} shows how the cross-correlation $r$ between two signals $A$ and $B$ of length $n$ may be calculated by multiplying the corresponding elements $i$ together and summing the products. This is done for each possible value of the delay $d$. The maximum value of the correlation $r$ corresponds to the most likely value of $d$.

\begin{equation}
r(d) = \sum_{i=1}^{n} A_i \cdot B_{i+d} \label{eq:cross_corr}
\end{equation}
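The search over delays can be sketched as a naive loop; this is only an illustration of \autoref{eq:cross_corr}, and a production implementation might well use an FFT-based correlation instead:

```python
def most_likely_delay(a, b, max_delay):
    """Evaluate r(d) = sum_i a[i] * b[i + d] for each candidate delay d
    and return the d that maximises the correlation."""
    best_d, best_r = 0, float("-inf")
    for d in range(max_delay + 1):
        # Only overlapping samples contribute to the sum.
        r = sum(a[i] * b[i + d] for i in range(len(a) - d))
        if r > best_r:
            best_d, best_r = d, r
    return best_d
```

For a feature shifted by two samples between two frames, the maximum of $r(d)$ falls at $d = 2$.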
\subsubsection{Chosen Method -- Statistical Method}
This method was developed as an aggregate of the previously attempted methods, improving on and solving issues that earlier iterations had. It is therefore simply the most successful iteration.

The statistical method carries out the following operations:
\begin{enumerate}
\item A user-provided area of interest is cropped out of the entire sensor frame. This is done to isolate only the most relevant and data-dense regions, as well as to eliminate error from static elements as much as possible.
\item This subset of the frame is then divided into one-dimensional vertical strips.
\item For each of the strips, the cross-correlation displacement is calculated.
\item With the set of displacement values for each strip, statistical outlier values are removed and a mean displacement is calculated.
\item This mean displacement in pixels, together with the camera frame geometry, is used to calculate the physical displacement in meters.
\end{enumerate}
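The outlier-rejection step above can be sketched with a simple standard-deviation cut-off; the cut-off of 1.5 standard deviations is an illustrative choice, not the project's actual criterion:

```python
import statistics

def mean_strip_displacement(strip_delays, z_cutoff=1.5):
    """Discard per-strip delays that lie far from the mean (e.g. the
    near-zero values caused by static elements) and average the rest."""
    mean = statistics.mean(strip_delays)
    stdev = statistics.pstdev(strip_delays)
    if stdev == 0:
        return mean  # all strips agree; nothing to discard
    kept = [d for d in strip_delays if abs(d - mean) <= z_cutoff * stdev]
    return statistics.mean(kept)
```

Given per-strip delays of `[10, 11, 10, 0, 10, 12, 0]`, the two zero values (strips dominated by static elements) are rejected and the mean of the remaining strips is 10.6 pixels.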

The use of this statistical approach with multiple vertical strips---see \autoref{fig:conveyor_xcorr}---is very similar to directly using a 2-dimensional cross-correlation; however, it attempts to solve a significant problem with the 2-dimensional method.

\begin{figure}[h]
\centering
\label{fig:conveyor_xcorr}
\end{figure}

Since a 2-dimensional cross-correlation would simultaneously consider the entire area of interest, any static elements in the frame would strongly influence the result of the correlation, causing it to always be close to zero. This introduces a high error and variability in the result.

This statistical approach allows us to discard outlier values---such as values close to zero---and retain only those slices which do not contain any static elements.

Before arriving at the statistical method described above, multiple iterations of alternative methods were developed.

Firstly, as already mentioned above, the \textbf{2-dimensional cross-correlation} was attempted. This method produces robust values and is less computationally complex than the statistical approach; however, it is significantly more sensitive to static elements. This introduces many challenges, since static elements may not be entirely avoided, either on the belt or on the sensor itself.

The other alternative method, called the \textbf{topographical method}, is much less computationally expensive, since it only runs one cycle of the cross-correlation algorithm per frame.

The topographical approach works in the following manner. The values within each \textbf{crosswise} slice are summed. This reduces the 2-dimensional sensor data into a 1-dimensional representation which is called the \textit{topography}. This topography can be used as the signal for cross-correlation.
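The reduction to a topography can be sketched as follows; the frame layout (one row per lengthwise position, with the columns running crosswise over the belt) is an assumption for illustration:

```python
def topography(frame):
    """Collapse a 2-D depth frame into a 1-D topography by summing the
    values within each crosswise slice (one row per belt position)."""
    return [sum(row) for row in frame]
```

Two frames' topographies can then be fed directly into the same cross-correlation routine as one-dimensional signals.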

The volume of material that has passed the sensor per frame, $V_F$, can be calculated from the cross-sectional area $A_C$, the material velocity $v$, and the framerate $f$, as shown in \autoref{eq:volavg}.

\begin{equation}
V_F = A_C \cdot v \cdot \frac{1}{f} \label{eq:volavg}
\end{equation}
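Assuming the per-frame volume combines the cross-sectional area, the material velocity, and the frame time $1/f$ (consistent with the $\frac{1}{f}$ factor above), a minimal sketch with illustrative numbers:

```python
def volume_per_frame(area_m2, velocity_m_s, frame_rate_hz):
    """Volume that passes the sensor in one frame interval:
    cross-sectional area x material velocity x frame time (1/f)."""
    return area_m2 * velocity_m_s / frame_rate_hz
```

At an assumed cross-section of 0.02 m², a material velocity of 1.5 m/s, and 30 frames per second, each frame accounts for 0.001 m³ of material.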

\subsection{Accuracy}

The accuracy of the system is limited primarily by the framerate of the sensor.

Equation \ref{eq:accuracyupperlimit} shows the theoretical maximum accuracy for a simple single line-scan method of determining volume. Thus, for the targeted framerate of 30 FPS, the theoretical maximum accuracy of this method is limited to \SI{3.3}{\percent}.

\begin{equation}\label{eq:accuracyupperlimit}
\text{Accuracy Upper Limit} = \frac{1}{f} \cdot \SI{100}{\percent}
\end{equation}
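The figure of \SI{3.3}{\percent} quoted above follows directly from this relation:

```python
def accuracy_upper_limit_percent(frame_rate_hz):
    """Theoretical per-frame resolution limit of a single line-scan
    method, expressed as a percentage (1/f * 100 %)."""
    return 100.0 / frame_rate_hz
```

At the targeted 30 FPS this evaluates to roughly 3.3 %, and doubling the framerate would halve the limit accordingly.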

Introducing multiple line-scans per frame would proportionally reduce this upper limit, at the cost of increased computational complexity.

The various manufacturers of conventional belt scales claim accuracies between \SI{0.5}{\percent} and \SI{2}{\percent}.

\section{Phases of Development}\label{sec:developmentphases}

The following phases of development are grouped not chronologically over the span of the project schedule, but rather into conceptual groups.

\subsection{Preparing Development and Build Environment}
The Qt GUI framework was used in order to create a GUI for the remote controller. This allowed for the sensor data to be more easily calibrated and aligned, as well as providing a consistent interface for end-user configuration. Qt was chosen for its ease of use, as well as its ability to be compiled cross-platform\cite{qtWebsite}.

\subsection{Development of Main Functionality}
At this stage of the design process, the functionality that is fundamental to the principle of operation described earlier was developed. These functions include:
\begin{itemize}
\item Transmission of raw sensor data
\item Calibration of sensor data
\end{itemize}

\section{Components}\label{sec:componenets}

As already touched upon in \autoref{sec:aims}, the components used in this project were chosen mainly for their commercial availability and low cost. This section elaborates on the decision to select these specific components.

The costs of these components are listed at the end of this section in \autoref{table:cost}.

\textbf{Raspberry Pi 4 Model B}\nopagebreak

The Raspberry Pi was chosen as the computation platform primarily for its widespread use in IoT and IIoT, its low cost, and its commercial availability. It also supports the Linux kernel and operating system, which greatly eases the software development and deployment process.

As shown in \autoref{tab:rpi}, the quad-core ARM processor as well as the 2 GB memory capacity provide ample performance for the intended computation. The wireless networking capability of the Raspberry Pi makes it an ideal candidate for an IoT product.

\begin{table}[H]
\centering
\begin{tabularx}{0.75\textwidth}{| c | >{\centering\arraybackslash}X |}
Operational Temperature & \SI{0}{\celsius} to \SI{50}{\celsius} \\
\hline
\end{tabularx}
\caption{The relevant technical specifications of the Raspberry Pi 4 Model B used in this project\cite{rpiSpecs}.}
\label{tab:rpi}
\end{table}

\textbf{Intel RealSense L515}\nopagebreak

The Intel RealSense L515 was also chosen primarily for its low cost. However, the open-source and Linux-friendly nature of Intel's RealSense SDK also makes it an ideal choice to pair with the Raspberry Pi. The small form factor of the sensor also allows for a final product that is compact and easy to install.

\autoref{table:l515} provides an overview of the specifications of the RealSense L515 sensor.

\begin{table}[H]
\centering
\begin{tabularx}{0.75\textwidth}{ >{\centering\arraybackslash}X | >{\centering\arraybackslash}X | >{\centering\arraybackslash}X |}
Operational Temperature & \SI{0}{\celsius} to \SI{50}{\celsius} \\
\hline
\end{tabularx}
\caption{The relevant technical specifications of the Intel RealSense L515 used in this project\cite{realsenseDatasheet}.}
\label{table:l515}
\end{table}

\textbf{netHAT}\nopagebreak

The netHAT by Hilscher provides a simple-to-use Industrial Ethernet interface for the Raspberry Pi. Through the Raspberry Pi HAT standard, the netHAT is easily installed on the GPIO pins of the Raspberry Pi.

The drivers---called CIFX---and the provided API library---called libCIFX---provide a simple way to interface with Industrial Ethernet networks from software. Section \ref{sec:softarch} shows how CIFX was integrated into the rest of the software architecture.

\autoref{table:nethat} gives an overview of the capabilities and specifications of the netHAT.

\begin{table}[H]
\centering
\begin{tabularx}{0.75\textwidth}{ | c | >{\centering\arraybackslash}X | } \hline
Interface & SPI up to 125MHz \\ \hline
Network & 2x Ethernet 100 BASE-TX \\ \hline
\end{tabularx}
\caption{The relevant technical specifications of the Hilscher netHAT\cite{nethatHilscher}.}
\label{table:nethat}
\end{table}

\textbf{Cost Breakdown}\nopagebreak

\autoref{table:cost} lists the individual costs of each of the components, and their total. This total is not reflective of the final cost of the completed product, as it does not yet include the housing, wiring, and other installation costs.

\begin{table}[H]
\centering
\begin{tabular}{| c | c |}
netHAT & \euro{69} \\ \hline
\end{tabular}
\caption{Cost breakdown of the components used in this project.}
\label{table:cost}
\end{table}
\section{Process Overview}

With the objective of creating a marketable commercial product in mind, the process flow was designed for ease-of-use and ease-of-configuration for the end-user. This is the justification for implementing a remote controller that allows the setup to be remotely configured once installed.

While \autoref{fig:processoverview} gives a brief overview of the interrelationship of the remote and local sides in the complete process, it is further elaborated here:
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{./design/ProcessOverview}
\caption{Overview of the communication and processing between the remote controller and the local processor.}
\label{fig:processoverview}
\end{figure}

\section{Software Architecture}\label{sec:softarch}

The software architecture developed in this project---see \autoref{fig:SoftwareArchitecture}---consists of two separate but tightly interconnected parts, namely:
\begin{itemize}
\item \textbf{FlowPi:} The local processing software that runs on the Raspberry Pi, and
\item \textbf{FlowRemote:} The remote control software that is meant to run on an external PC for configuration purposes.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{./design/SoftwareArchitecture}
\caption{Overview of the interactions between the various software components and their communication.}
\label{fig:SoftwareArchitecture}
\end{figure}

\subsection{Development Language Choice}
The software is written in \cpp\ for compatibility and performance reasons. All the device drivers provide libraries in either \clang\ or \cpp, while some drivers such as the library for the netHAT---called CIFX---are only provided in \clang.

The topic of performance between languages and systems is one of heated debate; however, \cpp\ was chosen for this project due to the ability to program comfortably in a higher-level language, while having the ability to \textit{\enquote{drop down}} into \clang. The \clang\ Programming Language is often the benchmark for higher-level programming languages when programming for Real-Time Systems due to its predictability and the ability to run operations with few layers of abstraction on memory directly\cite{pizlo2010}.

Furthermore, since the scale of the processing unit of the program is relatively small, the benefits that come from using a higher-level programming language---such as increased productivity, organization, and re-usability\cite{pizlo2010}---are not strictly necessary.

As shown in \autoref{fig:processinglib}, the main functionality of the processing unit includes:

\begin{itemize}
\item Cross-Correlation
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{./design/ProcessingLibrary}
\caption{Representation of how the processing unit is called by different components of the program.}
\label{fig:processinglib}
\end{figure}

These are implemented in \clang\ as much as possible. This is then encapsulated by a \cpp\ wrapper. This provides ease-of-use on the remote side, where processing is not real-time critical, while still allowing the local side to directly call the \clang\ processing functions.

\subsection{FlowRemote -- Remote Control GUI}

FlowRemote is designed to allow for easier configuration and calibration of the setup, as well as to enable the engineer to do so remotely. The idea is that---once the Raspberry Pi and LIDAR sensor have been installed over a conveyor system and given a network connection---the engineer no longer requires a direct physical connection to the Raspberry Pi in order to configure and calibrate the system. \autoref{fig:flowremotegui} shows the design of the GUI.

As described in \autoref{fig:flowremote}, FlowRemote allows the engineer to remotely preview the raw sensor data, run pre-processing on it, configure the processing parameters and deliver those back to the local processor running on the Raspberry Pi.

\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{./design/FlowRemote}
\label{fig:flowremote}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{./flowremote}
\caption{Design of the FlowRemote GUI.}
\label{fig:flowremotegui}
\end{figure}
%\section{Data Processing and Outputs}\label{sec:dataproc}
%rotation algorithm, skew algorithm, curve fitting library, cross-correlation algorithm...
%
%transmission to Profinet, output formats (float)\todo{complete section}

\section{Housing}

For the purposes of field-testing the project, a rudimentary housing was designed in CAD---see \autoref{fig:housing}---and 3D printed. The housing provided a small amount of protection from the environment for the otherwise bare Raspberry Pi.

\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{housing}
\caption{Isometric view of the underside (left) and topside (right) of the prototype housing.}
\label{fig:housing}
\end{figure}

The housing was constructed around the standard Raspberry Pi 4 Model B with the netHAT modules attached, allowing for the extra ports to be accessible through the housing as well.