
DIY Project High Performance Audio PC with high quality wiring


Recommended Posts

  • 2 months later...

Hi there,

 

I would like to refresh my current HQ server. Any suggestions for the brand or model of motherboard and GPU card?

 

1. mATX motherboard:

- for an i7 or i9 (13900K, 14900K, etc.)

 

2. GPU card

- targeting an RTX 4060 Ti with 16 GB memory, in a shorter card version

 

I am looking for a build capable of upscaling PCM to DSD512 with the EC modulators and the sinc-MG filter.

Feel free to share your ideas.

 

Thx,

Louie

Link to comment
  • 4 months later...
On 8/18/2023 at 12:45 PM, StreamFidelity said:

Inspired by the discussion about USB drivers, I would like to share my recommendations for best USB driver settings.

As Nenon pointed out, the USB drivers from Thesycon are the best in terms of sound. Thesycon Systemsoftware & Consulting GmbH is a German company, founded in June 1998, that has been dealing with USB from the beginning. As a private customer, however, you will not receive drivers from the company, because the DAC manufacturers themselves are responsible for integration and distribution.

As far as I know, Taiko Audio offers an excellent service to its customers. Taiko Audio provides a DAC-specific USB script: https://taikoaudio.com/taiko-2020/wp-content/uploads/2023/07/Supported-DACs-and-USB-Profile-Guide_v24.x61923.pdf

 

What the script does in detail is beyond my knowledge. However, I suspect it optimizes the buffer size, which you can also do yourself, provided that your DAC manufacturer supplies an ASIO driver that allows these settings.

Here I come back to Thesycon, because these USB drivers provide an administration interface for the buffer sizes. 

Using the T+A SDV 3100 HV as an example, I would like to explain the settings. The picture below shows the default setting with a relatively high buffer. The default is set by selecting the preferred buffer size; in the example it is 512 samples, which is a high value. Now you might think that a high buffer is a good thing because more data is cached. However, this thinking is wrong, because a high buffer slows down the processing. The following calculations show why this is the case.

 

Since the sample rate (here 44.1 kHz) determines the buffer timing, the conversion gives 688 samples for the input latency and 904 samples for the output latency. Latency means time delay, so the latencies are shown in milliseconds (ms) in brackets; one second is 1,000 ms. At the input we have a high 15.60 ms and at the output 20.50 ms. Safe Mode affects the last value and increases the output latency.
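For anyone who wants to check the conversion themselves, here is a minimal Python sketch of the arithmetic, using the sample counts and the 44.1 kHz rate quoted above:

# Convert a latency given in samples to milliseconds: ms = samples / rate * 1000
def samples_to_ms(samples, sample_rate_hz=44100):
    return samples / sample_rate_hz * 1000

print(f"input latency:  {samples_to_ms(688):.2f} ms")   # -> 15.60 ms
print(f"output latency: {samples_to_ms(904):.2f} ms")   # -> 20.50 ms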

 

[Screenshot: Thesycon control panel with the default 512-sample buffer and the resulting latencies]

You can improve these values significantly by selecting the lowest latencies, here with a preferred buffer size of 8 samples. The result is 1.18 ms at the input and 1.50 ms at the output.

 

[Screenshot: Thesycon control panel at the lowest-latency setting (8 samples)]


However, the CPU must be fast enough to process the smaller packets. If the CPU is too slow, dropouts will occur. In practice you have to experiment with several settings.

What are the positive effects of lowest latencies?

1. Corrupted data packets can be re-requested faster. Otherwise you get effects like crackling when data packets are lost.

2. Low latencies mean less jitter! The relationship is well explained here: Jitter - NETWORK ENCYCLOPEDIA

3. Low latencies probably mean less electrical noise, because data bursts are avoided and the transport of data packets is therefore smoother.

Conclusion

With a little effort you can minimize the latencies and improve the SQ with the buffer settings. 

By the way, this is also possible with network adapters. 🙂

 

Hello everyone,

I cannot agree with this finding. In my system, most of the time I have the impression that bigger buffers give me more fluid, more organic playback. Rooms become bigger.

If software buffers are programmed correctly, the data inside the buffer is addressed with pointers. Data is not moved or shifted around in the buffer; it's just the pointers that count up and down. So the size of the buffer plays no role during playback.
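To illustrate the pointer idea, here is a minimal ring-buffer sketch in Python (a simplified illustration, not any actual driver code): the samples stay where they were written, and only the read and write indices move.

# Minimal ring buffer: data is written once and never moved;
# only the read/write indices advance (modulo the buffer size).
class RingBuffer:
    def __init__(self, size):
        self.buf = [0] * size
        self.write_idx = 0
        self.read_idx = 0

    def push(self, sample):
        self.buf[self.write_idx % len(self.buf)] = sample
        self.write_idx += 1

    def pop(self):
        sample = self.buf[self.read_idx % len(self.buf)]
        self.read_idx += 1
        return sample

rb = RingBuffer(512)
rb.push(42)
print(rb.pop())  # -> 42; the sample is read in place, not shifted around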

Link to comment
1 hour ago, Xoverman said:

In my system, most of the time I have the impression that bigger buffers give me more fluid, more organic playback. Rooms become bigger.

Thank you for your results. A forum should provide different results so that everyone can decide for themselves what sounds best in their system.

 

In fact, smaller buffer values call for a more powerful CPU, as it is under more load.

 

And it also depends on the hardware of the interfaces. For example, a Solarflare X2522 NIC is characterised by the lowest latencies and jitter. The buffer values no longer play a major role here. 

Link to comment
On 7/26/2024 at 4:47 PM, StreamFidelity said:

Thank you for your results. A forum should provide different results so that everyone can decide for themselves what sounds best in their system.

 

In fact, smaller buffer values call for a more powerful CPU, as it is under more load.

 

And it also depends on the hardware of the interfaces. For example, a Solarflare X2522 NIC is characterised by the lowest latencies and jitter. The buffer values no longer play a major role here. 

That's true. I guess we all also listen for different things in the music. 

Link to comment

I managed to connect an OCXO to my Solarflare NIC. It helped a lot. But the Solarflare NIC still doesn't play as fluidly (liquid) as my Intel X540 server NIC. I guess the Solarflare is really optimized for ultra-fast reaction, but has problems with ultra-low-frequency jitter (close-in jitter).

[Photos: OCXO mounted next to the Solarflare NIC]

 

I had to interrupt one PCB trace between the installed clock and a via going to the other side of the PCB.

[Photo: interrupted PCB trace between the onboard clock and the via]

 

Then I placed the OCXO with its 3.3 V power supply (2x LT3045) on a concrete brick sitting on gel dampers, to keep vibration from the PC case away from the OCXO.

 

[Photo: OCXO and power supply on a concrete brick with gel dampers]

Link to comment
37 minutes ago, Xoverman said:

I managed to connect an OCXO to my Solarflare NIC.

Respect! 👍 I think you're the first person to try this with a Solarflare NIC. 

 

Which NIC are you talking about? X2522, for example, gets very hot and needs to be cooled either with a fan or, better still, passively. Otherwise the chip will throttle and even shut down. 

 

Clock cables are very sensitive. They should be kept as short as possible. This could be a problem with your test setup.

 

Incidentally, high-quality clocks are installed: Stratum 3 compliant oscillator; Oscillator drift 0.37 PPM per day (c. 32 ms/day); oscillator accuracy < 4.6PPM over 20 years. Source: Time Synchronization Features • Enhanced PTP User Guide (UG1602) • Reader • AMD Technical Information Portal
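As a quick sanity check of that spec, 0.37 PPM of drift does indeed work out to roughly 32 ms per day (a small Python sketch of the arithmetic):

# 0.37 PPM = 0.37 microseconds of drift per second of elapsed time
drift_ppm = 0.37
seconds_per_day = 24 * 60 * 60                       # 86,400 s
drift_ms_per_day = drift_ppm * 1e-6 * seconds_per_day * 1000
print(f"{drift_ms_per_day:.1f} ms/day")              # -> 32.0 ms/day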

Link to comment
1 hour ago, StreamFidelity said:

Respect! 👍 I think you're the first person to try this with a Solarflare NIC. 

 

Which NIC are you talking about? X2522, for example, gets very hot and needs to be cooled either with a fan or, better still, passively. Otherwise the chip will throttle and even shut down. 

 

Clock cables are very sensitive. They should be kept as short as possible. This could be a problem with your test setup.

 

Incidentally, high-quality clocks are installed: Stratum 3 compliant oscillator; Oscillator drift 0.37 PPM per day (c. 32 ms/day); oscillator accuracy < 4.6PPM over 20 years. Source: Time Synchronization Features • Enhanced PTP User Guide (UG1602) • Reader • AMD Technical Information Portal

I have the X2522. The NIC has two XOs: one 20 MHz XO for the NIC controller, and a high-precision clock chip. I think the clock chip is the Stratum 3 compliant oscillator, not the controller XO. And if you look at the specs, Stratum 3 isn't that great at all; OCXOs have much lower phase noise close to the carrier (10 Hz, 100 Hz).

The cable is a terminated 50-ohm coax, only 15 cm long. The NIC is cooled by the fans of the RTX card above it.

Link to comment
3 hours ago, StreamFidelity said:

X2522

It comes in two versions; only the Plus version supports ultra-low-latency Ethernet. However, the X2522 has a 1PPS bracket option that lets it connect to a GPS source with 1PPS for calibrating the hardware clock. Just like other RDMA-capable NICs (for example, the Intel E810), the FPGA-based Ethernet acceleration is good for receiving data (it offloads the ring queue), so it might be good for NAA.😉 @Miska what do you think about NAA supporting RDMA?

Link to comment
9 hours ago, El Guapo said:

@Miska what do you think about NAA supporting RDMA?

 

It is not supported at the moment. Most of the potential low-power ARM SoC NAA hardware doesn't support it. It would anyway need a control side channel and delivery to an intermediate place; I doubt the overall result would improve anything.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
4 hours ago, Miska said:

I doubt the overall result would improve anything.

In my mind, the most luxurious way to connect HQP and NAA is

 

[HQP server daemon] <-- RDMA --> [NIC] <-- SyncE (timestamped + QoS) --> [NIC] <-- RDMA --> [NAA endpoint daemon]

 

I think it would save a few µs of latency and reduce jitter when streaming 10+ channels of MCH DSD.

 

So far I tested Receive Side Scaling + Receive Packet Steering + Receive Flow Steering on NAA side. It did help buffering stability.

Before setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked ziggy-zaggy

[Screenshot: Anubis receive buffer before RSS+RPS+RFS, uneven fill level]

 

After setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked smoother

[Screenshot: Anubis receive buffer after RSS+RPS+RFS, smoother fill level]

 

I think smoother buffering could reduce the control intervention. Less control and less interruption are always good for audio streaming.

Link to comment
6 hours ago, El Guapo said:

In my mind, the most luxurious way to connect HQP and NAA is

 

[HQP server daemon] <-- RDMA --> [NIC] <-- SyncE (timestamped + QoS) --> [NIC] <-- RDMA --> [NAA endpoint daemon]

 

I think it would save a few µs of latency and reduce jitter when streaming 10+ channels of MCH DSD.

 

So far I tested Receive Side Scaling + Receive Packet Steering + Receive Flow Steering on NAA side. It did help buffering stability.

Before setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked ziggy-zaggy

[Screenshot: Anubis receive buffer before RSS+RPS+RFS, uneven fill level]

 

After setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked smoother

[Screenshot: Anubis receive buffer after RSS+RPS+RFS, smoother fill level]

 

I think smoother buffering could reduce the control intervention. Less control and less interruption are always good for audio streaming.


 

How did you do that ?

 

Link to comment
21 hours ago, El Guapo said:

It comes in two versions; only the Plus version supports ultra-low-latency Ethernet. However, the X2522 has a 1PPS bracket option that lets it connect to a GPS source with 1PPS for calibrating the hardware clock. Just like other RDMA-capable NICs (for example, the Intel E810), the FPGA-based Ethernet acceleration is good for receiving data (it offloads the ring queue), so it might be good for NAA.😉 @Miska what do you think about NAA supporting RDMA?

 

I ran the SolarflareTools-v1.9.1.

And they told me that the low-latency BIOS was installed as well as the default BIOS.
I hope that's all I have to do. Please correct me if I'm wrong. At the moment I'm using Windows 10 on that PC,

but I will switch to Server 2022 soon.

It's so good to have a network expert on this forum.

Link to comment
3 hours ago, Xoverman said:

How did you do that ?

I'm using the 5.15 RT kernel, and the textbook is here.

 

SoC NICs are sometimes fairly basic. For example, my UP Squared Pro 7000's NIC is a dual i226: no hash support and no way to redirect specific services. The Intel x7425e has 4 cores, so there are only 4 Rx/Tx rings.

 

Before starting to configure, we can check the Rx ring indirection table using ethtool (my x7425e's enp1s0 is used for generic Ethernet / HQP <-> NAA connections):

ethtool -x enp1s0

And my x7425e looks like this:

[Screenshot: ethtool -x output for enp1s0, showing the RX flow hash indirection table]

Packets arriving at the NIC are hashed with the Toeplitz function to decide which Rx ring they go to.

 

Because I configured the first two cores (CPU 0 and 1) to run networkaudiod on my x7425e, the traffic I plan to steer goes to Rx rings 0 and 1, using this command:

ethtool -X enp1s0 equal 2

And the result will become:

[Screenshot: ethtool -x output after "ethtool -X enp1s0 equal 2", indirection table limited to rings 0 and 1]

Now all incoming packets will be narrowed down to two Rx rings, either 0 or 1, which helps networkaudiod access and read the data quicker. This part is RSS. A more powerful NIC has more options to configure.
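As a conceptual sketch (not the driver's actual code), RSS can be pictured as indexing an indirection table with the packet's Toeplitz hash; after ethtool -X enp1s0 equal 2 the table only points at rings 0 and 1. The 128-entry table size below is just an example and varies per NIC.

# Conceptual model of RSS: the NIC hashes the packet's flow tuple (Toeplitz),
# then uses the hash to index an indirection table that names an Rx ring.
def rx_ring_for(flow_hash, indirection_table):
    return indirection_table[flow_hash % len(indirection_table)]

# After "ethtool -X enp1s0 equal 2" the table alternates 0,1,0,1,...
table = [0, 1] * 64
print(rx_ring_for(0x1A2B3C4D, table))  # every flow lands on ring 0 or ring 1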

 

For the RPS configuration, we can further set which CPU handles Rx rings 0 and 1. I plan to let CPUs 0 and 1, which run networkaudiod, handle them (this uses a CPU bitmask: binary 0001 = 1 selects CPU 0, binary 0010 = 2 selects CPU 1):

echo 1 > /sys/class/net/enp1s0/queues/rx-0/rps_cpus
echo 2 > /sys/class/net/enp1s0/queues/rx-1/rps_cpus

And also set CPUs 0 and 1 to handle the IRQs. Use this command to find the IRQ numbers:

grep enp /proc/interrupts

[Screenshot: grep enp /proc/interrupts output, showing the IRQ numbers for the Rx/Tx queues]

The IRQs for Rx rings 0 and 1 are 128 and 129, so:

echo 0 > /proc/irq/128/smp_affinity_list
echo 1 > /proc/irq/129/smp_affinity_list

Now the CPUs running networkaudiod own enp1s0's IRQs and Rx rings. From a network data handling perspective, the latency should improve somewhat.

 

Lastly, we can configure RFS. We can enlarge the global flow table for a better hit rate on incoming data:

echo 2048 > /proc/sys/net/core/rps_sock_flow_entries

Then set the per-queue flow tables for Rx rings 0 and 1:

echo 1024 > /sys/class/net/enp1s0/queues/rx-0/rps_flow_cnt
echo 1024 > /sys/class/net/enp1s0/queues/rx-1/rps_flow_cnt

Then... done.😊
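For convenience, here is a small Python sketch that bundles the steps above into one script you could run at boot (as root). It just replays the same ethtool and sysfs writes; the interface name (enp1s0), the IRQ numbers (128, 129) and the table sizes are the ones from this example and will differ on other systems, so treat them as placeholders.

#!/usr/bin/env python3
# Replay the RSS + RPS + RFS settings from the steps above (run as root).
import subprocess

IFACE = "enp1s0"                       # NIC used for the HQP <-> NAA connection
IRQS = {128: 0, 129: 1}                # IRQ number -> CPU that should service it
RX_QUEUE_CPU_MASK = {0: "1", 1: "2"}   # Rx queue -> CPU bitmask (CPU0 = 1, CPU1 = 2)

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# RSS: spread incoming traffic over the first two Rx rings only
subprocess.run(["ethtool", "-X", IFACE, "equal", "2"], check=True)

# RPS: let CPU 0 handle Rx ring 0 and CPU 1 handle Rx ring 1
for queue, mask in RX_QUEUE_CPU_MASK.items():
    write(f"/sys/class/net/{IFACE}/queues/rx-{queue}/rps_cpus", mask)

# IRQ affinity: pin each Rx IRQ to the matching CPU
for irq, cpu in IRQS.items():
    write(f"/proc/irq/{irq}/smp_affinity_list", str(cpu))

# RFS: global flow table plus per-queue flow counts
write("/proc/sys/net/core/rps_sock_flow_entries", "2048")
for queue in RX_QUEUE_CPU_MASK:
    write(f"/sys/class/net/{IFACE}/queues/rx-{queue}/rps_flow_cnt", "1024")

print("RSS + RPS + RFS settings applied.")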

 

Link to comment
17 hours ago, El Guapo said:

In my mind, the most luxurious way to connect HQP and NAA is

 

[HQP server daemon] <-- RDMA --> [NIC] <-- SyncE (timestamped + QoS) --> [NIC] <-- RDMA --> [NAA endpoint daemon]

 

I think it would save a few µs of latency and reduce jitter when streaming 10+ channels of MCH DSD.

 

It wouldn't have any impact on the latency. I don't know what that SyncE is, but NAA connection is asynchronous. More like your usual internet streaming. Intention is that it works nicely even over not so good WiFi connections, etc. With a huge FIFO buffer.

 

17 hours ago, El Guapo said:

So far I tested Receive Side Scaling + Receive Packet Steering + Receive Flow Steering on NAA side. It did help buffering stability.

Before setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked ziggy-zaggy

[Screenshot: Anubis receive buffer before RSS+RPS+RFS, uneven fill level]

 

Note that RAVENNA and NAA are as different as possible. RAVENNA is passing audio clocks over network, etc. While NAA is not transferring any clocks, but driven solely by the DAC's hardware clock.

 

You can take a look at NAA output buffer fill level with Client. As long as it doesn't run empty you don't get drop-outs.

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

Link to comment
11 hours ago, Miska said:

 

It wouldn't have any impact on the latency. I don't know what that SyncE is, but NAA connection is asynchronous. More like your usual internet streaming. Intention is that it works nicely even over not so good WiFi connections, etc. With a huge FIFO buffer.

 

 

Note that RAVENNA and NAA are as different as possible. RAVENNA is passing audio clocks over network, etc. While NAA is not transferring any clocks, but driven solely by the DAC's hardware clock.

 

You can take a look at NAA output buffer fill level with Client. As long as it doesn't run empty you don't get drop-outs.

 

 


 

But I truly believe that these points need further investigation.

Maybe there is a way to program the NAA FIFO even better, so that low-frequency jitter has less chance to bleed through. I am still not sure whether it is a problem of the NAA software buffer or a PCB ground plane problem of the opticalRendu. But the fact is that whatever you do on the primary side of the NAA is clearly audible. Changing network cards, switches, clocks in switches, power supplies in switches……. it's all audible. And it's not just me experiencing this phenomenon.

Dear Miska, this is not criticism of your software!
It comes from a wish to help explore the problems of digital playback and to join in solving them.

Link to comment
  • 4 weeks later...
On 7/29/2024 at 5:20 AM, El Guapo said:

In my mind, the most luxurious way to connect HQP and NAA is

 

[HQP server daemon] <-- RDMA --> [NIC] <-- SyncE (timestamped + QoS) --> [NIC] <-- RDMA --> [NAA endpoint daemon]

 

I think it would save a few µs of latency and reduce jitter when streaming 10+ channels of MCH DSD.

 

So far I tested Receive Side Scaling + Receive Packet Steering + Receive Flow Steering on NAA side. It did help buffering stability.

Before setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked ziggy-zaggy

[Screenshot: Anubis receive buffer before RSS+RPS+RFS, uneven fill level]

 

After setting RSS+RPS+RFS on my SoC NAA - RAV bridge, Anubis' receiving buffer looked smoother

[Screenshot: Anubis receive buffer after RSS+RPS+RFS, smoother fill level]

 

I think smoother buffering could reduce the control intervention. Less control and less interruption are always good for audio streaming.

What SoC NAA - RAV bridge do you have?

Link to comment
