A line is described by a simple equation that calculates Y from X, slope, and intercept. The purpose of linear regression is to find values for the slope and intercept that define the line that comes closest to the data.
Nonlinear regression is more general than linear regression and can fit any model (equation) to your data. It finds the values of the model's parameters that generate the curve that comes closest to the data.
Both linear and nonlinear regression find the values of the parameters (slope and intercept for linear regression) that make the line or curve come as close as possible to the data. More precisely, the goal is to minimize the sum of the squares of the vertical distances of the points from the line or curve.
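The "sum of the squares of the vertical distances" is easy to compute directly. The sketch below, using made-up data values for illustration, shows that a line near the points yields a much smaller sum of squares than a line far from them:

```python
# Sum of squared vertical distances from the points to a candidate line.
def sum_of_squares(x, y, slope, intercept):
    return sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))

# Hypothetical data, roughly following y = 2x.
x = [1, 2, 3, 4]
y = [2.1, 3.9, 6.2, 7.8]

near = sum_of_squares(x, y, 2.0, 0.0)  # a line close to the data
far = sum_of_squares(x, y, 1.0, 5.0)   # a line far from the data
```

Regression simply searches for the slope and intercept that make this quantity as small as possible.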
Linear regression accomplishes this goal using math that can be completely explained with simple algebra (shown in many statistics books). Put the data in, and the answers come out. There is no chance for ambiguity. You could even do the calculations by hand, if you wanted to.
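The algebra referred to above boils down to two short formulas for the best-fit slope and intercept. A minimal sketch, using hypothetical data that lie exactly on y = 2x + 1:

```python
def linear_regression(x, y):
    # Closed-form least-squares estimates, derivable with simple algebra:
    # slope = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return slope, intercept

slope, intercept = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

Put the data in, and the same answer always comes out; there is nothing iterative about it.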
Nonlinear regression uses a computationally intensive, iterative approach that can only be explained using calculus and matrix algebra. The method requires initial estimated values for each parameter.
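To see what "iterative, starting from initial estimates" means in practice, here is a minimal sketch of one classic approach (Gauss-Newton) fitting a one-parameter exponential-decay model to hypothetical noise-free data. Real programs use more elaborate algorithms, but the shape is the same: start from an initial estimate, repeatedly adjust it, and stop when the adjustments become negligible.

```python
import math

# Hypothetical data generated from y = exp(-k*x) with k = 0.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.exp(-0.5 * x) for x in xs]

def fit_decay(k):
    # Gauss-Newton iteration for the single parameter k, starting from
    # the caller-supplied initial estimate.
    for _ in range(25):
        f = [math.exp(-k * x) for x in xs]
        resid = [y - fi for y, fi in zip(ys, f)]
        jac = [-x * fi for x, fi in zip(xs, f)]  # derivative of the model w.r.t. k
        step = sum(r * j for r, j in zip(resid, jac)) / sum(j * j for j in jac)
        k += step
    return k

k_fit = fit_decay(1.0)  # initial estimate k = 1; converges to k = 0.5
```

With a poor initial estimate, iterations like these can stall or wander off, which is why nonlinear regression programs ask you for starting values.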
Nonlinear regression programs can fit any model, including a linear one. Linear regression is just a special case of nonlinear regression.
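To illustrate that a general-purpose iterative minimizer handles a linear model just as well, here is a sketch that fits a straight line by plain gradient descent on the sum of squares. Gradient descent is a crude stand-in for the algorithms real programs use, but it makes the point: the same machinery that fits curves also fits lines, and it recovers the same answer the closed-form algebra gives.

```python
# Hypothetical data lying exactly on y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

slope, intercept = 0.0, 0.0  # initial estimates
rate = 0.02                  # step size for the descent
for _ in range(10000):
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    # Step each parameter downhill along the gradient of the sum of squares.
    slope += rate * sum(r * x for r, x in zip(resid, xs))
    intercept += rate * sum(resid)
```

The iterations converge to slope 2 and intercept 1, the same values the closed-form formulas produce.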
Even if your goal is to fit a straight line through your data, there are many situations where it makes sense to choose nonlinear regression rather than linear regression.
Using nonlinear regression to analyze data is only slightly more difficult than using linear regression. Your choice of linear or nonlinear regression should be based on the model you are fitting. Do not use linear regression just to avoid using nonlinear regression. Avoid transformations such as Scatchard or Lineweaver-Burk transforms whose only goal is to linearize your data.
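For example, rather than taking reciprocals of enzyme-kinetics data to make a Lineweaver-Burk line, you can fit the Michaelis-Menten equation to the untransformed data directly. The sketch below uses hypothetical noise-free data and a crude grid search over the two parameters; the grid search stands in for the iterative algorithms real nonlinear regression programs use.

```python
def michaelis_menten(s, vmax, km):
    # Enzyme velocity as a function of substrate concentration s.
    return vmax * s / (km + s)

# Hypothetical data generated with Vmax = 10, Km = 2.
S = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [michaelis_menten(s, 10.0, 2.0) for s in S]

def ssr(vmax, km):
    # Sum of squared residuals for candidate parameter values.
    return sum((vi - michaelis_menten(si, vmax, km)) ** 2 for si, vi in zip(S, v))

# Crude grid search: Vmax over 5..15, Km over 0.5..5.5, step 0.1.
best = min(
    (ssr(5 + i * 0.1, 0.5 + j * 0.1), 5 + i * 0.1, 0.5 + j * 0.1)
    for i in range(101)
    for j in range(51)
)
_, vmax_fit, km_fit = best
```

Fitting the untransformed data respects the assumption that scatter is on the measured velocities; the reciprocal transform distorts that scatter, which is why the transform should be reserved for display, not analysis.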