<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"
"http://www.w3.org/TR/REC-html40/loose.dtd">
<HTML>
<HEAD>
<TITLE>sane-devel: Re: scsi command queuing</TITLE>
<META NAME="Author" CONTENT="abel deuring (a.deuring@satzbau-gmbh.de)">
<META NAME="Subject" CONTENT="Re: scsi command queuing">
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Re: scsi command queuing</H1>
<!-- received="Thu Jun 29 05:10:34 2000" -->
<!-- isoreceived="20000629121034" -->
<!-- sent="Thu, 29 Jun 2000 14:14:25 +0200" -->
<!-- isosent="20000629121425" -->
<!-- name="abel deuring" -->
<!-- email="a.deuring@satzbau-gmbh.de" -->
<!-- subject="Re: scsi command queuing" -->
<!-- id="395B3DA1.B771658E@satzbau-gmbh.de" -->
<!-- inreplyto="395A191F.E569B9AE@wolfsburg.de" -->
<STRONG>From:</STRONG> abel deuring (<A HREF="mailto:a.deuring@satzbau-gmbh.de?Subject=Re:%20scsi%20command%20queuing&In-Reply-To=<395B3DA1.B771658E@satzbau-gmbh.de>"><EM>a.deuring@satzbau-gmbh.de</EM></A>)<BR>
<STRONG>Date:</STRONG> Thu Jun 29 2000 - 05:14:25 PDT
<P>
<!-- next="start" -->
<UL>
<LI><STRONG>Next message:</STRONG> <A HREF="0213.html">Nathan Stenzel: "Re: Test backends with 'scanimage -T'"</A>
<LI><STRONG>Previous message:</STRONG> <A HREF="0211.html">Benjamin Low: "SANE 1.0.2 DLL problems"</A>
<LI><STRONG>In reply to:</STRONG> <A HREF="0201.html">Oliver Rauch: "scsi command queuing"</A>
<!-- nextthread="start" -->
<LI><STRONG>Next in thread:</STRONG> <A HREF="0216.html">Oliver Rauch: "Re: scsi command queuing"</A>
<LI><STRONG>Reply:</STRONG> <A HREF="0216.html">Oliver Rauch: "Re: scsi command queuing"</A>
<LI><STRONG>Reply:</STRONG> <A HREF="0233.html">Henning Meier-Geinitz: "Re: scsi command queuing"</A>
<!-- reply="end" -->
<LI><STRONG>Messages sorted by:</STRONG>
<A HREF="date.html#212">[ date ]</A>
<A HREF="index.html#212">[ thread ]</A>
<A HREF="subject.html#212">[ subject ]</A>
<A HREF="author.html#212">[ author ]</A>
</UL>
<HR NOSHADE><P>
<!-- body="start" -->
<P>
Oliver Rauch wrote:
<BR>
<EM>></EM><BR>
<EM>> Hi,</EM><BR>
<EM>></EM><BR>
<EM>> has someone experience with a sane backend and scsi command queueing?</EM><BR>
<EM>></EM><BR>
<EM>> I am just working on it for the umax backend.</EM><BR>
<EM>></EM><BR>
<EM>> At first I created some routines that replace the pipe to transfer the</EM><BR>
<EM>> data from the reader_process to the main process, it uses shared</EM><BR>
<EM>> memory</EM><BR>
<EM>> instead (on systems where shared memory is available, otherwise the</EM><BR>
<EM>> pipe is</EM><BR>
<EM>> used).</EM><BR>
<EM>></EM><BR>
<EM>> Unfortunately it does not speed up scanning large images. It really</EM><BR>
<EM>> looks like</EM><BR>
<EM>> the communication via the scsi bus is not fast enough.</EM><BR>
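<P>Oliver's pipe replacement can be sketched roughly as follows. This is a minimal, hypothetical example of handing scan data from a forked reader process to the parent through a System V shared-memory segment; the actual umax backend code certainly differs, and the sketch only works where SysV IPC is available:
<BR>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;sys/ipc.h&gt;
#include &lt;sys/shm.h&gt;
#include &lt;sys/wait.h&gt;
#include &lt;unistd.h&gt;

#define BUF_SIZE 4096

int main(void)
{
    /* Create an anonymous shared segment; the child inherits the
     * attachment across fork(). */
    int shm_id = shmget(IPC_PRIVATE, BUF_SIZE, IPC_CREAT | 0600);
    if (shm_id &lt; 0) { perror("shmget"); return 1; }
    char *buf = shmat(shm_id, NULL, 0);
    if (buf == (char *) -1) { perror("shmat"); return 1; }
    /* Mark for removal once both processes detach. */
    shmctl(shm_id, IPC_RMID, NULL);

    pid_t pid = fork();
    if (pid == 0) {                   /* reader process: "scans" into shm */
        memset(buf, 0x55, BUF_SIZE);  /* stand-in for the SCSI read data */
        _exit(0);
    }
    waitpid(pid, NULL, 0);            /* parent: wait until data is ready */
    printf("first byte: 0x%02x\n", (unsigned char) buf[0]);
    shmdt(buf);
    return 0;
}
</PRE>
A real backend would of course poll a ready flag instead of waitpid(), so that scanning and consuming overlap; the point is only that the data never passes through a pipe's kernel buffer.
<BR>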
<P>As far as my experiences go, the scan speed (mainly the number of
<BR>
scan head stops) depends on quite a number of factors:
<BR>
<P>- more or less broken scanner firmware
<BR>
- too little memory on the scanner's controller
<BR>
- slow responses by the host machine (backend; sanei_scsi layer; speed
<BR>
of the low level drivers)
<BR>
<P>As Douglas stated in his response to your mail, I had quite some success
<BR>
with speeding up the Sharp JX250 with command queueing. Command queueing
<BR>
combined with a buffer size of 128 kB (or for 400 dpi scans, 256 kB)
<BR>
avoids all scan head stops, at least if the JX250 is connected to an
<BR>
Adaptec 2940 or some NCR controller (sorry, can't remember which one;
<BR>
it needs the ncr53c8xx driver). If the JX250 is connected to an Adaptec
<BR>
1542, 5 or 10 scan head stops remain.
<BR>
<A HREF="http://www.satzbau-gmbh.de/staff/abel/jx250perf.html">http://www.satzbau-gmbh.de/staff/abel/jx250perf.html</A> shows the results of
<BR>
some speed tests (not for the 1542).
<BR>
<P>On the other hand, I also tried forking and command queueing with the
<BR>
Microtek backend in order to speed up an old Microtek Scanmaker II,
<BR>
without any success. My conclusion was that the Microtek's firmware
<BR>
probably stops the scan head after each "read data" command, instead of
<BR>
scanning a little bit further "just in case" the next read command
<BR>
might come soon. The Scanmaker III does not show this behaviour -- at
<BR>
least gray scale scans usually don't have any scan head stops.
<BR>
<P><EM>> So I added scsi command queueing into the umax backend. But I am not</EM><BR>
<EM>> sure how I can see</EM><BR>
<EM>> 1) how/if it works (sanei_scsi debug output is not good enough)</EM><BR>
<P>"If": Well, the JX250 shows that command queueing works and can have
<BR>
some influence :) Regarding "how": sanei_scsi_open checks how many
<BR>
commands can be queued by the Linux SCSI subsystem (there is a DBG
<BR>
statement showing this number). sanei_scsi_req_enter checks if this
<BR>
queue is full; if it isn't, it sends the command to the SG driver, else
<BR>
it queues the command internally. sanei_scsi_req_wait waits for the
<BR>
oldest queued command to finish; if there are any commands in the
<BR>
"sanei_scsi-internal" queue which have not yet been sent to the SG
<BR>
driver, they are sent until the low level queue is again full, or the
<BR>
internal queue is completely sent.
<BR>
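<P>The enter/wait pattern above can be modelled without any SCSI hardware. In this hypothetical sketch a bounded "kernel" queue accepts commands until it is full and the rest wait in an internal list; all names and the queue depth are invented for illustration (the real logic lives in sanei_scsi.c):
<BR>
<PRE>
#include &lt;stdio.h&gt;

#define KERNEL_SLOTS 2   /* what sanei_scsi_open would learn from the driver */
#define MAX_REQS     8

static int kernel_queue[MAX_REQS], kq_len = 0;   /* commands the "SG driver" holds */
static int internal_queue[MAX_REQS], iq_len = 0; /* commands held back in user space */
static int completed = 0;

static void req_enter(int cmd)
{
    if (kq_len &lt; KERNEL_SLOTS)
        kernel_queue[kq_len++] = cmd;        /* room below: issue immediately  */
    else
        internal_queue[iq_len++] = cmd;      /* kernel queue full: queue it    */
}

static void req_wait(void)
{
    int i;
    /* complete the oldest issued command */
    completed = kernel_queue[0];
    for (i = 1; i &lt; kq_len; i++) kernel_queue[i - 1] = kernel_queue[i];
    kq_len--;
    /* refill the kernel queue from the internal one while there is room */
    while (iq_len &gt; 0 &amp;&amp; kq_len &lt; KERNEL_SLOTS) {
        kernel_queue[kq_len++] = internal_queue[0];
        for (i = 1; i &lt; iq_len; i++) internal_queue[i - 1] = internal_queue[i];
        iq_len--;
    }
}

int main(void)
{
    int cmd;
    for (cmd = 1; cmd &lt;= 4; cmd++) req_enter(cmd); /* 2 issued, 2 held back */
    printf("issued=%d held=%d\n", kq_len, iq_len);
    req_wait();                      /* oldest completes, one held command issued */
    printf("done=%d issued=%d held=%d\n", completed, kq_len, iq_len);
    return 0;
}
</PRE>
The backend thus keeps the driver's queue as full as possible at all times, which is exactly what hides the per-command latency.
<BR>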
<P>Henning Meier-Geinitz wrote:
<BR>
<EM>></EM><BR>
<EM>> Hi,</EM><BR>
<EM>></EM><BR>
<EM>> On Wed, Jun 28, 2000 at 05:26:23PM +0200, Oliver Rauch wrote:</EM><BR>
<EM>> > has someone experience with a sane backend and scsi command queueing?</EM><BR>
<EM>></EM><BR>
<EM>> I have tried this some weeks ago without much success. To be exact: it is</EM><BR>
<EM>> possible to send more than one scsi_req_enter before scsi_req_wait and there</EM><BR>
<EM>> is no problem with this. But it isn't faster than waiting for each request</EM><BR>
<EM>> and then sending the next. I haven't looked deeply into the code (and I</EM><BR>
<EM>> don't understand much of SCSI) but the following lines looked suspicious:</EM><BR>
<EM>> (in scsi_req_wait())</EM><BR>
<EM>> /* Now issue next command asap, if any. We can't do this</EM><BR>
<EM>> earlier since the Linux kernel has space for just one big</EM><BR>
<EM>> buffer. */</EM><BR>
<EM>> issue (req->next);</EM><BR>
<EM>></EM><BR>
<EM>> So if I understand this correctly the scsi_req_enter only schedules the</EM><BR>
<EM>> request and it will be sent to the driver whenever any pending request is</EM><BR>
<EM>> finished. Maybe the "Linux can only use one big buffer" is history with the</EM><BR>
<EM>> newer sg drivers?</EM><BR>
<P>Right. I forgot to remove the comment you quoted above when I worked on
<BR>
sanei_scsi.c.
<BR>
<P><EM>> Same here. I looked at the scanning times when the backend does nothing but</EM><BR>
<EM>> getting data from the scanner and ignoring it. There was no big change in</EM><BR>
<EM>> scanning time (about 5 %). With the original SCSI adapter the Mustek</EM><BR>
<EM>> scanners are about twice as slow as with Windows despite large (4 MB) SCSI</EM><BR>
<EM>> buffers and tweaking the Linux SCSI driver.</EM><BR>
<P>Which adapter is shipped with the Mustek? And do other adapters work
<BR>
better?
<BR>
<P>Wolfgang Rapp wrote:
<BR>
<P><EM>> But I think the biggest bottleneck is the interaction between backends and kernel</EM><BR>
<EM>> after every scsi-command. The interaction time caused by</EM><BR>
<EM>> system calls, kernel scheduling etc. is too long to keep the</EM><BR>
<EM>> scanner running; the next scan command block should be sent</EM><BR>
<EM>> by the driver when it receives the completion interrupt from the last.</EM><BR>
<P>Agreed.
<BR>
<P><EM>> If we talk about scanspeed we should think about extending the sg driver for</EM><BR>
<EM>> doublebuffering data pages in kernel memory space</EM><BR>
<EM>> and command block repeat count.</EM><BR>
<P>Well, I think that the sanei_scsi_req_enter / sanei_scsi_req_wait
<BR>
mechanism should give similar results as command repeating, but the
<BR>
former is more flexible, because you can also send some status inquiries
<BR>
or whatever between two "read data" commands.
<BR>
<P><EM>> Filling one buffer by dma from scsi hardware</EM><BR>
<EM>> and copying in parallel the other out to the user</EM><BR>
<EM>> space instead of waiting for interrupts. Maybe somebody has looked more at the</EM><BR>
<EM>> linux sg driver source code than I and knows more</EM><BR>
<EM>> about how it works. But so all backends must be changed because not all could</EM><BR>
<EM>> be done in sanei_scsi.</EM><BR>
<P>From the sanei_scsi viewpoint, Linux queueing is not that difficult:
<BR>
Simply send as many commands as possible, and wait for the results :)
<BR>
Whether the kernel needs to store the read data in an internal buffer
<BR>
is not an important question for sanei_scsi. For the 2.0 and 2.2
<BR>
kernels, this happens, but AFAIK the 2.4 kernels will support user space
<BR>
DMA. The write call to the SG driver, which starts a SCSI command,
<BR>
contains a pointer to the memory location where the backend wants the
<BR>
data to be written to. How many buffers are involved inside the kernel
<BR>
before the data is written to the user space buffer doesn't matter...
<BR>
well, that can of course be a performance issue.
<BR>
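<P>To illustrate "the request carries a pointer to the backend's buffer": the sketch below fills a request header for a read-style command. The struct is a simplified stand-in modelled on the Linux sg v3 sg_io_hdr so the example stays self-contained; real code would use the definition from &lt;scsi/sg.h&gt; and hand the header to the SG driver on a /dev/sg* file descriptor:
<BR>
<PRE>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

/* Simplified stand-in for the sg v3 request header (field names follow
 * sg_io_hdr, but this is NOT the kernel's definition). */
struct fake_sg_io_hdr {
    int            interface_id;    /* 'S' for the v3 interface          */
    int            dxfer_direction; /* device-to-host for a read         */
    unsigned char  cmd_len;
    unsigned int   dxfer_len;       /* bytes to transfer                 */
    void          *dxferp;          /* user-space destination buffer     */
    unsigned char *cmdp;            /* the SCSI command block itself     */
};

int main(void)
{
    static unsigned char image[64 * 1024]; /* backend's own buffer       */
    unsigned char read_cmd[10] = { 0x28 }; /* READ(10)-style opcode      */
    struct fake_sg_io_hdr hdr;

    memset(&amp;hdr, 0, sizeof hdr);
    hdr.interface_id    = 'S';
    hdr.dxfer_direction = -3;       /* value of SG_DXFER_FROM_DEV        */
    hdr.cmd_len         = sizeof read_cmd;
    hdr.cmdp            = read_cmd;
    /* The key point from the mail: the request names the backend's own
     * buffer, so with user-space DMA the kernel can place the scan data
     * there directly, however it buffers internally. */
    hdr.dxfer_len       = sizeof image;
    hdr.dxferp          = image;

    printf("request: %u bytes into backend buffer\n", hdr.dxfer_len);
    return 0;
}
</PRE>
<BR>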
<P>Douglas Gilbert wrote:
<BR>
<P><EM>> Abel found in a few situations there is a benefit. My theory</EM><BR>
<EM>> is that queuing commands up against your adapter driver</EM><BR>
<EM>> (3 layers down within the kernel) gives better latencies</EM><BR>
<EM>> than queuing commands in the app (or just waiting for the</EM><BR>
<EM>> previous one to finish).</EM><BR>
<P>Agreed. But to some - quite small, but perhaps important - extent, the
<BR>
latency is also a matter of the drivers and/or hardware involved: My
<BR>
tests with the Sharp JX250 showed that my NCR adapter gives a slightly
<BR>
better performance on a Pentium 100 MHz machine than an Adaptec 2940.
<BR>
<P>Now for a different idea. (If I'm going to talk nonsense, let me know.)
<BR>
If my memory is right, it is for example possible even with the ISA card
<BR>
aha1542 to issue SCSI commands with data blocks larger than 64 kB. Since
<BR>
the DMA block size for ISA cards is limited to 64 kB, this means that
<BR>
the kernel must organize more than one DMA transfer for one SCSI
<BR>
command. At present, these data are collected in a large buffer (or with
<BR>
scatter-gather, in several buffers), and when a SCSI command
<BR>
finishes, all bytes are at once transferred to the user memory. In other
<BR>
words, the machine must have enough memory to buffer the entire data
<BR>
block (or more copies, if DMA to user space is not possible). This
<BR>
sets some limit for the reasonable data block size of a SCSI command:
<BR>
the size should of course not be larger than the physical memory
<BR>
installed; and since Unix is a multitasking OS, one should leave enough
<BR>
memory for other processes. Using data block sizes of more than a few
<BR>
hundred kB for SCSI commands is in my opinion a bad idea even on a
<BR>
workstation with 128 MB RAM or more.
<BR>
<P>On the other hand, it might help to speed up a scan, if only one or two
<BR>
read commands are issued for an entire scan. For higher resolutions and
<BR>
large scan windows, this means to read several dozens of megabytes with
<BR>
just one command.
<BR>
<P>OK, now, is there (or, could there be) a way to set up something similar
<BR>
to piping, so that the data sent from the scanner for one SCSI command
<BR>
can be read in smaller chunks by the backend?
<BR>
<P>The problem for Sane is that an implementation of this idea is a matter
<BR>
of the kernel, so that we cannot hope to have it available for all the
<BR>
Unixes supported by Sane, but it could be implemented as an optional
<BR>
function.
<BR>
<P>Abel
<BR>
<P><PRE>
--
Source code, list archive, and docs: <A HREF="http://www.mostang.com/sane/">http://www.mostang.com/sane/</A>
To unsubscribe: echo unsubscribe sane-devel | mail <A HREF="mailto:majordomo@mostang.com?Subject=Re:%20scsi%20command%20queuing&In-Reply-To=<395B3DA1.B771658E@satzbau-gmbh.de>">majordomo@mostang.com</A>
</PRE>
<P><!-- body="end" -->
<HR NOSHADE>
<UL>
<!-- next="start" -->
<LI><STRONG>Next message:</STRONG> <A HREF="0213.html">Nathan Stenzel: "Re: Test backends with 'scanimage -T'"</A>
<LI><STRONG>Previous message:</STRONG> <A HREF="0211.html">Benjamin Low: "SANE 1.0.2 DLL problems"</A>
<LI><STRONG>In reply to:</STRONG> <A HREF="0201.html">Oliver Rauch: "scsi command queuing"</A>
<!-- nextthread="start" -->
<LI><STRONG>Next in thread:</STRONG> <A HREF="0216.html">Oliver Rauch: "Re: scsi command queuing"</A>
<LI><STRONG>Reply:</STRONG> <A HREF="0216.html">Oliver Rauch: "Re: scsi command queuing"</A>
<LI><STRONG>Reply:</STRONG> <A HREF="0233.html">Henning Meier-Geinitz: "Re: scsi command queuing"</A>
<!-- reply="end" -->
<LI><STRONG>Messages sorted by:</STRONG>
<A HREF="date.html#212">[ date ]</A>
<A HREF="index.html#212">[ thread ]</A>
<A HREF="subject.html#212">[ subject ]</A>
<A HREF="author.html#212">[ author ]</A>
</UL>
<!-- trailer="footer" -->
<HR NOSHADE>
<P>
<SMALL>
<EM>
This archive was generated by <A HREF="http://www.hypermail.org/">hypermail 2b29</A>
: <EM>Thu Jun 29 2000 - 05:13:21 PDT</EM>
</EM>
</SMALL>
</BODY>
</HTML>