"Via WIFI" WebDAV server not working

Currently in public beta: Windows Phone version

Moderators: white, sheep, Hacker, Stefan2

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

"Via WIFI" WebDAV server not working

Post by *HAL 9000 » 2017-08-13, 19:27 UTC

OS build: 10.0.15063.540 (Windows 10)

When I click "via wifi", Total Commander gives me a URL like "http://192.168.1.123:8081/9384". This works fine in a web browser, but with the Linux cadaver WebDAV client I see:

Code:

OPTIONS /9384/ HTTP/1.1
User-Agent: cadaver/0.23.3 neon/0.30.1
Keep-Alive: 
Connection: TE, Keep-Alive
TE: trailers
Host: 192.168.1.123:8081

HTTP/1.0 501 Not Implemented
Connection: close
Content-Type: text/html

<html><head><title>501 Not Implemented</title>
</head><body><h2>501 Not Implemented</h2></body></html>
What I'm trying to do is recursively upload a file tree from computer to phone without running a Samba server on the computer. Correct me if I'm wrong, but the phone's WebDAV server *should* allow this in conjunction with cadaver, nautilus/gvfsd-dav, or wdfs. The web interface only allows single-file uploads.

ghisler(Author)
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-08-14, 14:11 UTC

My (very simple, self-written) WebDAV server doesn't currently support the OPTIONS command. What options are you trying to set?
Author of Total Commander
http://www.ghisler.com

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-08-14, 22:47 UTC

I've tried:

* cadaver ...
* cadaver --tolerant ...
* mount -t davfs ...
* wdfs ...
* gvfs-mount ...

I don't think gvfsd-dav, wdfs, or davfs2 have any option to avoid sending OPTIONS. However, davfs2.conf supports ignore_dav_header / add_header: could these help?
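For reference, a davfs2.conf fragment using those two directives might look like this (untested on my side; the exact syntax is in the man page linked below, and the header name/value here are just examples):

```
# /etc/davfs2/davfs2.conf  (sketch; see davfs2.conf(5) for exact syntax)

# Ignore the DAV capability header sent by the server.
ignore_dav_header 1

# Add an extra header to every request (name and value are examples).
add_header X-Example 1
```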

* https://linux.die.net/man/5/davfs2.conf

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-08-14, 23:19 UTC

The specific cadaver error message is:

Code:

Could not open collection:
Could not read response body: Connection reset by peer
Cadaver, wdfs and davfs2 all use libneon, and this error occurs in `read_response_block` when reading from the socket returns <0. That in turn is caused by Total Commander sending a RST packet rather than FIN/ACK (is it possible you are missing a shutdown()?). Cadaver's `--tolerant` option is handled afterwards, which explains why it has no effect. The cadaver code path goes through neon's `ne_options`, which determines server capabilities using OPTIONS.

gvfsd-dav uses its own HTTP backend, gvfsbackendhttp.c.

ghisler(Author)
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-08-17, 08:31 UTC

Reading from the socket returning <0 means that the connection is dead. TC doesn't just drop the connection, so the server must have dropped it. Do you see any errors in the server log?
Author of Total Commander
http://www.ghisler.com

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-08-17, 17:59 UTC

Are we on the same page? TC for Windows Phone does provide a WebDAV server? That's the server I'm using:

> My (very simple self written) WebDAV server doesn't currently support the OPTIONS command

This is exactly what happens when I use the cadaver client to connect to the Total Commander WebDAV server on Windows Phone:

> this error happens when reading from the socket returns <0

I also have the Wireshark logs to confirm that TC sent RST rather than FIN/ACK.

ghisler(Author)
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-08-21, 13:15 UTC

> TC for windows phone does provide a WebDAV server?

Yes, "Send via WiFi" runs a local WebDAV server. Have a look at the thread title!

Apparently this cadaver client is sending an OPTIONS command, which the server doesn't understand, so my server reports error 501 Not Implemented. The client should be able to handle that and continue the operation without the OPTIONS command.
Author of Total Commander
http://www.ghisler.com

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-08-22, 14:40 UTC

TC does send a 501 reply, but then abruptly closes the socket by sending RST. So the client (cadaver in this case, but also every other client available on Linux) reads the 501 in its read loop, tries to read more from the now-closed socket, and gets a -1 return code (rather than the 0 it would get if FIN/ACK had been sent), which causes the client to exit. The client therefore never gets the opportunity to process the 501 reply.

ghisler(Author)
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-08-24, 09:50 UTC

TC closes the connection gracefully, and it includes a "Connection: close" header with all replies, since it doesn't currently support multiple commands in one connection.
Author of Total Commander
http://www.ghisler.com

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-08-26, 16:57 UTC

I don't have any other explanation for this:

Code:

$ cadaver --tolerant http://192.168.1.155:8081/9384
Could not open collection:
Could not read response body: Connection reset by peer
dav:/9384/? 
From cadaver's lib/neon/ne_request.c:

Code:

/* Reads a block of the response into BUFFER, which is of size
 * *BUFLEN.  Returns zero on success or non-zero on error.  On
 * success, *BUFLEN is updated to be the number of bytes read into
 * BUFFER (which will be 0 to indicate the end of the repsonse).  On
 * error, the connection is closed and the session error string is
 * set.  */
static int read_response_block(ne_request *req, struct ne_response *resp, 
                               char *buffer, size_t *buflen) 
{
    ne_socket *const sock = req->session->socket;
    size_t willread;
    ssize_t readlen;
    
    switch (resp->mode) {
    case R_CHUNKED:
        /* Chunked transfer-encoding: chunk syntax is "SIZE CRLF CHUNK
         * CRLF SIZE CRLF CHUNK CRLF ..." followed by zero-length
         * chunk: "CHUNK CRLF 0 CRLF".  resp.chunk.remain contains the
         * number of bytes left to read in the current chunk. */
        if (resp->body.chunk.remain == 0) {
            unsigned long chunk_len;
            char *ptr;

            /* Read the chunk size line into a temporary buffer. */
            SOCK_ERR(req,
                     ne_sock_readline(sock, req->respbuf, sizeof req->respbuf),
                     _("Could not read chunk size"));
            NE_DEBUG(NE_DBG_HTTP, "[chunk] < %s", req->respbuf);
            chunk_len = strtoul(req->respbuf, &ptr, 16);
            /* limit chunk size to <= UINT_MAX, so it will probably
             * fit in a size_t. */
            if (ptr == req->respbuf || 
                chunk_len == ULONG_MAX || chunk_len > UINT_MAX) {
                return aborted(req, _("Could not parse chunk size"), 0);
            }
            NE_DEBUG(NE_DBG_HTTP, "Got chunk size: %lu\n", chunk_len);
            resp->body.chunk.remain = chunk_len;
        }
        willread = resp->body.chunk.remain > *buflen
            ? *buflen : resp->body.chunk.remain;
        break;
    case R_CLENGTH:
        willread = resp->body.clen.remain > (off_t)*buflen 
            ? *buflen : (size_t)resp->body.clen.remain;
        break;
    case R_TILLEOF:
        willread = *buflen;
        break;
    case R_NO_BODY:
    default:
        willread = 0;
        break;
    }
    if (willread == 0) {
        *buflen = 0;
        return 0;
    }
    NE_DEBUG(NE_DBG_HTTP,
             "Reading %" NE_FMT_SIZE_T " bytes of response body.\n", willread);
    readlen = ne_sock_read(sock, buffer, willread);

    /* EOF is only valid when response body is delimited by it.
     * Strictly, an SSL truncation should not be treated as an EOF in
     * any case, but SSL servers are just too buggy.  */
    if (resp->mode == R_TILLEOF && 
        (readlen == NE_SOCK_CLOSED || readlen == NE_SOCK_TRUNC)) {
        NE_DEBUG(NE_DBG_HTTP, "Got EOF.\n");
        req->can_persist = 0;
        readlen = 0;
    } else if (readlen < 0) {
        return aborted(req, _("Could not read response body"), readlen);
    } else {
        NE_DEBUG(NE_DBG_HTTP, "Got %" NE_FMT_SSIZE_T " bytes.\n", readlen);
    }
    /* safe to cast: readlen guaranteed to be >= 0 above */
    *buflen = (size_t)readlen;
    NE_DEBUG(NE_DBG_HTTPBODY,
             "Read block (%" NE_FMT_SSIZE_T " bytes):\n[%.*s]\n",
             readlen, (int)readlen, buffer);
    if (resp->mode == R_CHUNKED) {
        resp->body.chunk.remain -= readlen;
        if (resp->body.chunk.remain == 0) {
            char crlfbuf[2];
            /* If we've read a whole chunk, read a CRLF */
            readlen = ne_sock_fullread(sock, crlfbuf, 2);
            if (readlen < 0)
                return aborted(req, _("Could not read chunk delimiter"),
                               readlen);
            else if (crlfbuf[0] != '\r' || crlfbuf[1] != '\n')
                return aborted(req, _("Chunk delimiter was invalid"), 0);
        }
    } else if (resp->mode == R_CLENGTH) {
        resp->body.clen.remain -= readlen;
    }
    resp->progress += readlen;
    return NE_OK;
}
Screenshot of the Wireshark trace showing TC sending RST, as well as the extracted TCP stream:

Image: https://s27.postimg.org/52z29939t/tc-cadaver-wireshark.png

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-09-02, 01:25 UTC

Bump; any ideas why this happens? Have you seen the Wireshark trace screenshot? The error must be from TC...

ghisler(Author)
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-09-04, 12:55 UTC

It's because cadaver sends OPTIONS and can't handle the 501 error saying that it's not supported.

I could try to implement OPTIONS and return something, but I have no idea what cadaver expects to receive.

What do you see in the log when connecting to other WebDAV servers with cadaver?
Author of Total Commander
http://www.ghisler.com

HAL 9000
Senior Member
Posts: 384
Joined: 2007-09-10, 13:05 UTC

Post by *HAL 9000 » 2017-09-13, 21:19 UTC

You're right; cadaver *does* require OPTIONS to function.

More details:

The cadaver client initializes the connection by calling `open_connection` (in the cadaver client itself) -> `ne_options` (just a wrapper) -> `ne_options2` -> `ne_request_dispatch` -> `ne_discard_response` -> `ne_read_response_block` -> `read_response_block`. The last function, `read_response_block`, behaves exactly like read(2): it returns <0 on error, or reads a chunk of data, possibly less than requested. `ne_discard_response` loops until all the data is read and returns NE_OK, or NE_ERROR if `read_response_block` failed. This status bubbles back to `ne_options2`, which reinterprets NE_OK with a non-2xx HTTP response code as NE_ERROR. The cadaver client in `open_connection` requires NE_OK (i.e. HTTP 200) in order to continue, so OPTIONS must be supported (`ne_options` only issues an OPTIONS request). The `--tolerant` option is only applied by cadaver (via the `set_path` function) after receiving NE_OK.

While cadaver won't work with any server not supporting OPTIONS, Total Commander also closes the connection by sending RST rather than FIN/ACK. Even though the end result is the same, it shouldn't happen.

For comparison with a regular WebDAV server without built-in OPTIONS support, I used nginx: out of the box, the nginx WebDAV module supports neither PROPFIND nor OPTIONS. Configuration below:

Code:

daemon off;
worker_processes auto;
error_log nginx.logs/error.log;

pid nginx.temp/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

load_module <path-to:ngx_http_dav_ext_module.so>;

events {
    worker_connections 1024;
}

http {
    log_format  main        '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

    access_log              nginx.logs/access.log  main;

    client_body_temp_path   nginx.temp/client_body;
    proxy_temp_path         nginx.temp/proxy;
    fastcgi_temp_path       nginx.temp/fastcgi;
    uwsgi_temp_path         nginx.temp/uwsgi;
    scgi_temp_path          nginx.temp/scgi;


    sendfile                on;
    tcp_nopush              on;
    tcp_nodelay             on;
    keepalive_timeout       65;
    types_hash_max_size     4096;

    include                 /etc/nginx/mime.types;
    default_type            application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen              8080 default_server;
        listen              [::]:8080 default_server;
        server_name         _;
        root                /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include             /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }

        location /dav {
            alias <path-to:webdav-root>;
        
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS;
            dav_access user:rw group:rw all:rw;
            create_full_put_path on;

            # non-dav module options
            client_max_body_size 0;
            autoindex on;
        
            allow 192.168.1.0/24;
            allow 127.0.0.1;
            deny all;
        }
    }
}
I then added PROPFIND and OPTIONS support via the nginx-dav-ext-module [1]. The handler for the `NGX_HTTP_OPTIONS` case is relatively simple, so it should be a good guide for implementing OPTIONS in Total Commander. Unfortunately, this nginx module requires both OPTIONS and PROPFIND: enabling only OPTIONS in nginx.conf via `dav_ext_methods OPTIONS;` breaks the WebDAV implementation, because the server then claims to support PROPFIND (even though it is disabled). Cadaver therefore receives an HTTP 405 when it sends a PROPFIND request, leading to this message:

Code:

Ignored error: /dav/source/ not WebDAV-enabled:
405 Not Allowed
In any case, I've attached the Wireshark trace and cadaver log from cadaver interacting with a server supporting OPTIONS and PROPFIND:
  • Cadaver response:

    Code:

    $ cadaver -t http://localhost:8080/dav/source
    dav:/dav/source/> ls
    Listing collection `/dav/source/': succeeded.
    ...
    dav:/dav/source/> quit
    Connection to `localhost' closed.
  • NGINX logs:

    Code:

    $ cat nginx.logs/access.log
    127.0.0.1 - - [13/Sep/2017:16:16:51 -0400] "OPTIONS /dav/source/ HTTP/1.1" 200 0 "-" "cadaver/0.23.3 neon/0.30.1" "-"
    127.0.0.1 - - [13/Sep/2017:16:16:51 -0400] "PROPFIND /dav/source/ HTTP/1.1" 207 608 "-" "cadaver/0.23.3 neon/0.30.1" "-"
    127.0.0.1 - - [13/Sep/2017:16:16:53 -0400] "PROPFIND /dav/source/ HTTP/1.1" 207 3752 "-" "cadaver/0.23.3 neon/0.30.1" "-"
  • Screenshot of the Wireshark trace showing a regular WebDAV server supporting OPTIONS+PROPFIND, as well as the extracted TCP stream:
    Image: https://s26.postimg.org/cqevw8u87/cadaver-nginx.OPTIONS.PROPFIND-wireshark.png
  • This is the extracted TCP stream from the Wireshark trace. The server responses are indented, and the final server response listing each directory entry is truncated because it is very long.

    Code:

    OPTIONS /dav/source/ HTTP/1.1
    User-Agent: cadaver/0.23.3 neon/0.30.1
    Keep-Alive: 
    Connection: TE, Keep-Alive
    TE: trailers
    Host: localhost:8080
    
        HTTP/1.1 200 OK
        Server: nginx/1.13.5
        Date: Wed, 13 Sep 2017 20:18:31 GMT
        Content-Length: 0
        Connection: keep-alive
        DAV: 1
        Allow: GET,HEAD,PUT,DELETE,MKCOL,COPY,MOVE,PROPFIND,OPTIONS
    
    PROPFIND /dav/source/ HTTP/1.1
    User-Agent: cadaver/0.23.3 neon/0.30.1
    Connection: TE
    TE: trailers
    Host: localhost:8080
    Depth: 0
    Content-Length: 288
    Content-Type: application/xml
    
    <?xml version="1.0" encoding="utf-8"?>
    <propfind xmlns="DAV:"><prop>
    <getcontentlength xmlns="DAV:"/>
    <getlastmodified xmlns="DAV:"/>
    <executable xmlns="http://apache.org/dav/props/"/>
    <resourcetype xmlns="DAV:"/>
    <checked-in xmlns="DAV:"/>
    <checked-out xmlns="DAV:"/>
    </prop></propfind>
    
        HTTP/1.1 207 Multi-Status
        Server: nginx/1.13.5
        Date: Wed, 13 Sep 2017 20:18:31 GMT
        Transfer-Encoding: chunked
        Connection: keep-alive
    
        47
        <?xml version="1.0" encoding="utf-8" ?>
        <D:multistatus xmlns:D="DAV:">
    
        1f0
        <D:response>
        <D:href>/dav/source/</D:href>
        <D:propstat>
        <D:prop>
        <D:creationdate>2017-09-13T19:54:27Z</D:creationdate>
        <D:displayname></D:displayname>
        <D:getcontentlanguage/>
        <D:getcontentlength>182</D:getcontentlength>
        <D:getcontenttype/>
        <D:getetag/>
        <D:getlastmodified>Wed, 13 Sep 2017 19:54:27 GMT</D:getlastmodified>
        <D:lockdiscovery/>
        <D:resourcetype><D:collection/></D:resourcetype>
        <D:source/>
        <D:supportedlock/>
        </D:prop>
        <D:status>HTTP/1.1 200 OK</D:status>
        </D:propstat>
        </D:response>
    
        11
        </D:multistatus>
    
        0
    
    PROPFIND /dav/source/ HTTP/1.1
    User-Agent: cadaver/0.23.3 neon/0.30.1
    Connection: TE
    TE: trailers
    Host: localhost:8080
    Depth: 1
    Content-Length: 288
    Content-Type: application/xml
    
    <?xml version="1.0" encoding="utf-8"?>
    <propfind xmlns="DAV:"><prop>
    <getcontentlength xmlns="DAV:"/>
    <getlastmodified xmlns="DAV:"/>
    <executable xmlns="http://apache.org/dav/props/"/>
    <resourcetype xmlns="DAV:"/>
    <checked-in xmlns="DAV:"/>
    <checked-out xmlns="DAV:"/>
    </prop></propfind>
    
        HTTP/1.1 207 Multi-Status
        Server: nginx/1.13.5
        Date: Wed, 13 Sep 2017 20:18:33 GMT
        Transfer-Encoding: chunked
        Connection: keep-alive
    
        47
        <?xml version="1.0" encoding="utf-8" ?>
        <D:multistatus xmlns:D="DAV:">
    
        1f0
        <D:response>
        <D:href>/dav/source/</D:href>
        <D:propstat>
        <D:prop>
        <D:creationdate>2017-09-13T19:54:27Z</D:creationdate>
        <D:displayname></D:displayname>
        <D:getcontentlanguage/>
        <D:getcontentlength>182</D:getcontentlength>
        <D:getcontenttype/>
        <D:getetag/>
        <D:getlastmodified>Wed, 13 Sep 2017 19:54:27 GMT</D:getlastmodified>
        <D:lockdiscovery/>
        <D:resourcetype><D:collection/></D:resourcetype>
        <D:source/>
        <D:supportedlock/>
        </D:prop>
        <D:status>HTTP/1.1 200 OK</D:status>
        </D:propstat>
        </D:response>
    
        ...
        ... (
        ...  truncated: it lists all the entries, which is really long and not
        ...  really on-topic.
        ... )
        ...
    

1. https://github.com/arut/nginx-dav-ext-module

ghisler(Author)
Site Admin
Site Admin
Posts: 36425
Joined: 2003-02-04, 09:46 UTC
Location: Switzerland
Contact:

Post by *ghisler(Author) » 2017-09-15, 14:07 UTC

Thanks for the detailed log of the connection. I will try to support OPTIONS.
Author of Total Commander
http://www.ghisler.com
